Test Report: KVM_Linux_crio 18358

2f1fe73fe0a81db98fd5a1fcfb9006c4b42c71ed:2024-03-11:33520

Failed tests (31/319)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 155.32
53 TestAddons/StoppedEnableDisable 154.38
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 8.13
172 TestMutliControlPlane/serial/StopSecondaryNode 142.14
174 TestMutliControlPlane/serial/RestartSecondaryNode 61.08
176 TestMutliControlPlane/serial/RestartClusterKeepsNodes 386.4
179 TestMutliControlPlane/serial/StopCluster 142.17
239 TestMultiNode/serial/RestartKeepsNodes 309.74
241 TestMultiNode/serial/StopMultiNode 141.54
248 TestPreload 281.42
256 TestKubernetesUpgrade 384.9
284 TestPause/serial/SecondStartNoReconfiguration 61.81
322 TestStartStop/group/old-k8s-version/serial/FirstStart 291.81
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.16
350 TestStartStop/group/no-preload/serial/Stop 139.01
353 TestStartStop/group/embed-certs/serial/Stop 138.97
354 TestStartStop/group/old-k8s-version/serial/DeployApp 0.49
355 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 99.84
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
360 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
364 TestStartStop/group/old-k8s-version/serial/SecondStart 776.21
365 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.22
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.22
367 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.2
368 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.37
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 383.11
370 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 334.06
371 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 344.22
372 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 88.13
TestAddons/parallel/Ingress (155.32s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-118179 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-118179 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-118179 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a141ff9e-e505-4bd1-ac33-95eb2183ab84] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a141ff9e-e505-4bd1-ac33-95eb2183ab84] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.004458134s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-118179 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-118179 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.014142228s)

** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-118179 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-118179 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.50
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-118179 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-118179 addons disable ingress-dns --alsologtostderr -v=1: (1.150336008s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-118179 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-118179 addons disable ingress --alsologtostderr -v=1: (7.959056917s)
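For reference, the failing check can be retried by hand with the same commands the test ran above. This is only a rough sketch, assuming the addons-118179 profile from this run is still available and that it is executed from a minikube checkout containing the testdata manifests; the `kubectl wait` selector and timeout are inferred from the log line "waiting 8m0s for pods matching run=nginx". Exit status 28 from curl (surfaced here as ssh status 28) normally indicates the request timed out.

	$ kubectl --context addons-118179 replace --force -f testdata/nginx-ingress-v1.yaml
	$ kubectl --context addons-118179 replace --force -f testdata/nginx-pod-svc.yaml
	$ kubectl --context addons-118179 wait --for=condition=ready pod -l run=nginx --timeout=8m0s
	# the step that failed in this run: curl through the VM with the ingress Host header
	$ out/minikube-linux-amd64 -p addons-118179 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"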
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-118179 -n addons-118179
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-118179 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-118179 logs -n 25: (1.391000203s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-991647                                                                     | download-only-991647 | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC | 11 Mar 24 20:10 UTC |
	| delete  | -p download-only-462238                                                                     | download-only-462238 | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC | 11 Mar 24 20:10 UTC |
	| delete  | -p download-only-924667                                                                     | download-only-924667 | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC | 11 Mar 24 20:10 UTC |
	| delete  | -p download-only-991647                                                                     | download-only-991647 | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC | 11 Mar 24 20:10 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-409325 | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC |                     |
	|         | binary-mirror-409325                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43807                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-409325                                                                     | binary-mirror-409325 | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC | 11 Mar 24 20:10 UTC |
	| addons  | enable dashboard -p                                                                         | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC |                     |
	|         | addons-118179                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC |                     |
	|         | addons-118179                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-118179 --wait=true                                                                | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC | 11 Mar 24 20:12 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-118179 addons                                                                        | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:12 UTC | 11 Mar 24 20:12 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-118179 ssh cat                                                                       | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:12 UTC | 11 Mar 24 20:12 UTC |
	|         | /opt/local-path-provisioner/pvc-3907be86-6656-46a8-8487-459ee24b4993_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-118179 addons disable                                                                | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:12 UTC | 11 Mar 24 20:13 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-118179 ip                                                                            | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:12 UTC | 11 Mar 24 20:12 UTC |
	| addons  | addons-118179 addons disable                                                                | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:12 UTC | 11 Mar 24 20:12 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-118179 addons disable                                                                | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:12 UTC | 11 Mar 24 20:12 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:12 UTC | 11 Mar 24 20:12 UTC |
	|         | -p addons-118179                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:13 UTC | 11 Mar 24 20:13 UTC |
	|         | addons-118179                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-118179 ssh curl -s                                                                   | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:13 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:13 UTC | 11 Mar 24 20:13 UTC |
	|         | addons-118179                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:13 UTC | 11 Mar 24 20:13 UTC |
	|         | -p addons-118179                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-118179 addons                                                                        | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:13 UTC | 11 Mar 24 20:13 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-118179 addons                                                                        | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:13 UTC | 11 Mar 24 20:13 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-118179 ip                                                                            | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:15 UTC | 11 Mar 24 20:15 UTC |
	| addons  | addons-118179 addons disable                                                                | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:15 UTC | 11 Mar 24 20:15 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-118179 addons disable                                                                | addons-118179        | jenkins | v1.32.0 | 11 Mar 24 20:15 UTC | 11 Mar 24 20:15 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 20:10:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 20:10:12.625249   18976 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:10:12.625458   18976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:10:12.625466   18976 out.go:304] Setting ErrFile to fd 2...
	I0311 20:10:12.625470   18976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:10:12.625628   18976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:10:12.626188   18976 out.go:298] Setting JSON to false
	I0311 20:10:12.626943   18976 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3162,"bootTime":1710184651,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 20:10:12.626999   18976 start.go:139] virtualization: kvm guest
	I0311 20:10:12.629265   18976 out.go:177] * [addons-118179] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 20:10:12.630816   18976 notify.go:220] Checking for updates...
	I0311 20:10:12.630819   18976 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 20:10:12.632662   18976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 20:10:12.634040   18976 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:10:12.635278   18976 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:10:12.636435   18976 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 20:10:12.637716   18976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 20:10:12.639112   18976 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 20:10:12.669196   18976 out.go:177] * Using the kvm2 driver based on user configuration
	I0311 20:10:12.670362   18976 start.go:297] selected driver: kvm2
	I0311 20:10:12.670381   18976 start.go:901] validating driver "kvm2" against <nil>
	I0311 20:10:12.670393   18976 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 20:10:12.671289   18976 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:10:12.671371   18976 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 20:10:12.685124   18976 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 20:10:12.685165   18976 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 20:10:12.685363   18976 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 20:10:12.685417   18976 cni.go:84] Creating CNI manager for ""
	I0311 20:10:12.685426   18976 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 20:10:12.685432   18976 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 20:10:12.685476   18976 start.go:340] cluster config:
	{Name:addons-118179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-118179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:10:12.685568   18976 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:10:12.687301   18976 out.go:177] * Starting "addons-118179" primary control-plane node in "addons-118179" cluster
	I0311 20:10:12.688565   18976 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:10:12.688597   18976 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0311 20:10:12.688608   18976 cache.go:56] Caching tarball of preloaded images
	I0311 20:10:12.688673   18976 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 20:10:12.688693   18976 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 20:10:12.689001   18976 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/config.json ...
	I0311 20:10:12.689025   18976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/config.json: {Name:mk3d58a29f36929959f1f32ce0c5e685e5947245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:10:12.689172   18976 start.go:360] acquireMachinesLock for addons-118179: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 20:10:12.689234   18976 start.go:364] duration metric: took 45.213µs to acquireMachinesLock for "addons-118179"
	I0311 20:10:12.689257   18976 start.go:93] Provisioning new machine with config: &{Name:addons-118179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:addons-118179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:10:12.689315   18976 start.go:125] createHost starting for "" (driver="kvm2")
	I0311 20:10:12.691234   18976 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0311 20:10:12.691348   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:10:12.691390   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:10:12.704840   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0311 20:10:12.705215   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:10:12.705682   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:10:12.705706   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:10:12.706057   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:10:12.706278   18976 main.go:141] libmachine: (addons-118179) Calling .GetMachineName
	I0311 20:10:12.706448   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:10:12.706621   18976 start.go:159] libmachine.API.Create for "addons-118179" (driver="kvm2")
	I0311 20:10:12.706650   18976 client.go:168] LocalClient.Create starting
	I0311 20:10:12.706689   18976 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem
	I0311 20:10:12.757457   18976 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem
	I0311 20:10:12.915449   18976 main.go:141] libmachine: Running pre-create checks...
	I0311 20:10:12.915468   18976 main.go:141] libmachine: (addons-118179) Calling .PreCreateCheck
	I0311 20:10:12.915915   18976 main.go:141] libmachine: (addons-118179) Calling .GetConfigRaw
	I0311 20:10:12.916307   18976 main.go:141] libmachine: Creating machine...
	I0311 20:10:12.916319   18976 main.go:141] libmachine: (addons-118179) Calling .Create
	I0311 20:10:12.916454   18976 main.go:141] libmachine: (addons-118179) Creating KVM machine...
	I0311 20:10:12.917699   18976 main.go:141] libmachine: (addons-118179) DBG | found existing default KVM network
	I0311 20:10:12.918402   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:12.918260   18998 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0311 20:10:12.918427   18976 main.go:141] libmachine: (addons-118179) DBG | created network xml: 
	I0311 20:10:12.918442   18976 main.go:141] libmachine: (addons-118179) DBG | <network>
	I0311 20:10:12.918473   18976 main.go:141] libmachine: (addons-118179) DBG |   <name>mk-addons-118179</name>
	I0311 20:10:12.918487   18976 main.go:141] libmachine: (addons-118179) DBG |   <dns enable='no'/>
	I0311 20:10:12.918502   18976 main.go:141] libmachine: (addons-118179) DBG |   
	I0311 20:10:12.918516   18976 main.go:141] libmachine: (addons-118179) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0311 20:10:12.918524   18976 main.go:141] libmachine: (addons-118179) DBG |     <dhcp>
	I0311 20:10:12.918537   18976 main.go:141] libmachine: (addons-118179) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0311 20:10:12.918552   18976 main.go:141] libmachine: (addons-118179) DBG |     </dhcp>
	I0311 20:10:12.918628   18976 main.go:141] libmachine: (addons-118179) DBG |   </ip>
	I0311 20:10:12.918660   18976 main.go:141] libmachine: (addons-118179) DBG |   
	I0311 20:10:12.918678   18976 main.go:141] libmachine: (addons-118179) DBG | </network>
	I0311 20:10:12.918687   18976 main.go:141] libmachine: (addons-118179) DBG | 
	I0311 20:10:12.923654   18976 main.go:141] libmachine: (addons-118179) DBG | trying to create private KVM network mk-addons-118179 192.168.39.0/24...
	I0311 20:10:12.985645   18976 main.go:141] libmachine: (addons-118179) DBG | private KVM network mk-addons-118179 192.168.39.0/24 created
	I0311 20:10:12.985675   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:12.985595   18998 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:10:12.985689   18976 main.go:141] libmachine: (addons-118179) Setting up store path in /home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179 ...
	I0311 20:10:12.985708   18976 main.go:141] libmachine: (addons-118179) Building disk image from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0311 20:10:12.985800   18976 main.go:141] libmachine: (addons-118179) Downloading /home/jenkins/minikube-integration/18358-11004/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0311 20:10:13.209846   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:13.209731   18998 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa...
	I0311 20:10:13.329107   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:13.329013   18998 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/addons-118179.rawdisk...
	I0311 20:10:13.329131   18976 main.go:141] libmachine: (addons-118179) DBG | Writing magic tar header
	I0311 20:10:13.329142   18976 main.go:141] libmachine: (addons-118179) DBG | Writing SSH key tar header
	I0311 20:10:13.329152   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:13.329129   18998 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179 ...
	I0311 20:10:13.329267   18976 main.go:141] libmachine: (addons-118179) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179
	I0311 20:10:13.329296   18976 main.go:141] libmachine: (addons-118179) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines
	I0311 20:10:13.329305   18976 main.go:141] libmachine: (addons-118179) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179 (perms=drwx------)
	I0311 20:10:13.329316   18976 main.go:141] libmachine: (addons-118179) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines (perms=drwxr-xr-x)
	I0311 20:10:13.329322   18976 main.go:141] libmachine: (addons-118179) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube (perms=drwxr-xr-x)
	I0311 20:10:13.329331   18976 main.go:141] libmachine: (addons-118179) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004 (perms=drwxrwxr-x)
	I0311 20:10:13.329339   18976 main.go:141] libmachine: (addons-118179) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0311 20:10:13.329346   18976 main.go:141] libmachine: (addons-118179) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0311 20:10:13.329352   18976 main.go:141] libmachine: (addons-118179) Creating domain...
	I0311 20:10:13.329371   18976 main.go:141] libmachine: (addons-118179) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:10:13.329399   18976 main.go:141] libmachine: (addons-118179) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004
	I0311 20:10:13.329414   18976 main.go:141] libmachine: (addons-118179) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0311 20:10:13.329423   18976 main.go:141] libmachine: (addons-118179) DBG | Checking permissions on dir: /home/jenkins
	I0311 20:10:13.329436   18976 main.go:141] libmachine: (addons-118179) DBG | Checking permissions on dir: /home
	I0311 20:10:13.329445   18976 main.go:141] libmachine: (addons-118179) DBG | Skipping /home - not owner
	I0311 20:10:13.330325   18976 main.go:141] libmachine: (addons-118179) define libvirt domain using xml: 
	I0311 20:10:13.330346   18976 main.go:141] libmachine: (addons-118179) <domain type='kvm'>
	I0311 20:10:13.330353   18976 main.go:141] libmachine: (addons-118179)   <name>addons-118179</name>
	I0311 20:10:13.330358   18976 main.go:141] libmachine: (addons-118179)   <memory unit='MiB'>4000</memory>
	I0311 20:10:13.330363   18976 main.go:141] libmachine: (addons-118179)   <vcpu>2</vcpu>
	I0311 20:10:13.330371   18976 main.go:141] libmachine: (addons-118179)   <features>
	I0311 20:10:13.330376   18976 main.go:141] libmachine: (addons-118179)     <acpi/>
	I0311 20:10:13.330387   18976 main.go:141] libmachine: (addons-118179)     <apic/>
	I0311 20:10:13.330392   18976 main.go:141] libmachine: (addons-118179)     <pae/>
	I0311 20:10:13.330409   18976 main.go:141] libmachine: (addons-118179)     
	I0311 20:10:13.330414   18976 main.go:141] libmachine: (addons-118179)   </features>
	I0311 20:10:13.330421   18976 main.go:141] libmachine: (addons-118179)   <cpu mode='host-passthrough'>
	I0311 20:10:13.330426   18976 main.go:141] libmachine: (addons-118179)   
	I0311 20:10:13.330440   18976 main.go:141] libmachine: (addons-118179)   </cpu>
	I0311 20:10:13.330448   18976 main.go:141] libmachine: (addons-118179)   <os>
	I0311 20:10:13.330454   18976 main.go:141] libmachine: (addons-118179)     <type>hvm</type>
	I0311 20:10:13.330467   18976 main.go:141] libmachine: (addons-118179)     <boot dev='cdrom'/>
	I0311 20:10:13.330474   18976 main.go:141] libmachine: (addons-118179)     <boot dev='hd'/>
	I0311 20:10:13.330484   18976 main.go:141] libmachine: (addons-118179)     <bootmenu enable='no'/>
	I0311 20:10:13.330499   18976 main.go:141] libmachine: (addons-118179)   </os>
	I0311 20:10:13.330512   18976 main.go:141] libmachine: (addons-118179)   <devices>
	I0311 20:10:13.330521   18976 main.go:141] libmachine: (addons-118179)     <disk type='file' device='cdrom'>
	I0311 20:10:13.330532   18976 main.go:141] libmachine: (addons-118179)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/boot2docker.iso'/>
	I0311 20:10:13.330539   18976 main.go:141] libmachine: (addons-118179)       <target dev='hdc' bus='scsi'/>
	I0311 20:10:13.330547   18976 main.go:141] libmachine: (addons-118179)       <readonly/>
	I0311 20:10:13.330551   18976 main.go:141] libmachine: (addons-118179)     </disk>
	I0311 20:10:13.330560   18976 main.go:141] libmachine: (addons-118179)     <disk type='file' device='disk'>
	I0311 20:10:13.330569   18976 main.go:141] libmachine: (addons-118179)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0311 20:10:13.330581   18976 main.go:141] libmachine: (addons-118179)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/addons-118179.rawdisk'/>
	I0311 20:10:13.330603   18976 main.go:141] libmachine: (addons-118179)       <target dev='hda' bus='virtio'/>
	I0311 20:10:13.330616   18976 main.go:141] libmachine: (addons-118179)     </disk>
	I0311 20:10:13.330621   18976 main.go:141] libmachine: (addons-118179)     <interface type='network'>
	I0311 20:10:13.330630   18976 main.go:141] libmachine: (addons-118179)       <source network='mk-addons-118179'/>
	I0311 20:10:13.330637   18976 main.go:141] libmachine: (addons-118179)       <model type='virtio'/>
	I0311 20:10:13.330642   18976 main.go:141] libmachine: (addons-118179)     </interface>
	I0311 20:10:13.330649   18976 main.go:141] libmachine: (addons-118179)     <interface type='network'>
	I0311 20:10:13.330655   18976 main.go:141] libmachine: (addons-118179)       <source network='default'/>
	I0311 20:10:13.330665   18976 main.go:141] libmachine: (addons-118179)       <model type='virtio'/>
	I0311 20:10:13.330676   18976 main.go:141] libmachine: (addons-118179)     </interface>
	I0311 20:10:13.330748   18976 main.go:141] libmachine: (addons-118179)     <serial type='pty'>
	I0311 20:10:13.330791   18976 main.go:141] libmachine: (addons-118179)       <target port='0'/>
	I0311 20:10:13.330803   18976 main.go:141] libmachine: (addons-118179)     </serial>
	I0311 20:10:13.330810   18976 main.go:141] libmachine: (addons-118179)     <console type='pty'>
	I0311 20:10:13.330815   18976 main.go:141] libmachine: (addons-118179)       <target type='serial' port='0'/>
	I0311 20:10:13.330822   18976 main.go:141] libmachine: (addons-118179)     </console>
	I0311 20:10:13.330827   18976 main.go:141] libmachine: (addons-118179)     <rng model='virtio'>
	I0311 20:10:13.330838   18976 main.go:141] libmachine: (addons-118179)       <backend model='random'>/dev/random</backend>
	I0311 20:10:13.330870   18976 main.go:141] libmachine: (addons-118179)     </rng>
	I0311 20:10:13.330890   18976 main.go:141] libmachine: (addons-118179)     
	I0311 20:10:13.330901   18976 main.go:141] libmachine: (addons-118179)     
	I0311 20:10:13.330910   18976 main.go:141] libmachine: (addons-118179)   </devices>
	I0311 20:10:13.330933   18976 main.go:141] libmachine: (addons-118179) </domain>
	I0311 20:10:13.330941   18976 main.go:141] libmachine: (addons-118179) 
	I0311 20:10:13.336694   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:26:30:1b in network default
	I0311 20:10:13.337203   18976 main.go:141] libmachine: (addons-118179) Ensuring networks are active...
	I0311 20:10:13.337221   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:13.337743   18976 main.go:141] libmachine: (addons-118179) Ensuring network default is active
	I0311 20:10:13.338050   18976 main.go:141] libmachine: (addons-118179) Ensuring network mk-addons-118179 is active
	I0311 20:10:13.338442   18976 main.go:141] libmachine: (addons-118179) Getting domain xml...
	I0311 20:10:13.339012   18976 main.go:141] libmachine: (addons-118179) Creating domain...
	I0311 20:10:14.687444   18976 main.go:141] libmachine: (addons-118179) Waiting to get IP...
	I0311 20:10:14.688319   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:14.688668   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:14.688703   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:14.688662   18998 retry.go:31] will retry after 275.360165ms: waiting for machine to come up
	I0311 20:10:14.965094   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:14.965522   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:14.965583   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:14.965497   18998 retry.go:31] will retry after 341.068118ms: waiting for machine to come up
	I0311 20:10:15.307851   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:15.308213   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:15.308241   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:15.308179   18998 retry.go:31] will retry after 446.951763ms: waiting for machine to come up
	I0311 20:10:15.756714   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:15.757134   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:15.757225   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:15.757093   18998 retry.go:31] will retry after 453.129296ms: waiting for machine to come up
	I0311 20:10:16.211687   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:16.212010   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:16.212039   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:16.211965   18998 retry.go:31] will retry after 671.566255ms: waiting for machine to come up
	I0311 20:10:16.884773   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:16.885276   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:16.885300   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:16.885229   18998 retry.go:31] will retry after 759.922737ms: waiting for machine to come up
	I0311 20:10:17.647263   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:17.647651   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:17.647673   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:17.647613   18998 retry.go:31] will retry after 985.360667ms: waiting for machine to come up
	I0311 20:10:18.634111   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:18.634572   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:18.634610   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:18.634541   18998 retry.go:31] will retry after 1.203078174s: waiting for machine to come up
	I0311 20:10:19.839836   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:19.840319   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:19.840342   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:19.840284   18998 retry.go:31] will retry after 1.557916086s: waiting for machine to come up
	I0311 20:10:21.400475   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:21.401007   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:21.401033   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:21.400962   18998 retry.go:31] will retry after 1.557072679s: waiting for machine to come up
	I0311 20:10:22.959823   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:22.960311   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:22.960335   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:22.960290   18998 retry.go:31] will retry after 2.46840098s: waiting for machine to come up
	I0311 20:10:25.429980   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:25.430386   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:25.430411   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:25.430337   18998 retry.go:31] will retry after 3.175476892s: waiting for machine to come up
	I0311 20:10:28.607219   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:28.607576   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:28.607605   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:28.607531   18998 retry.go:31] will retry after 3.061875991s: waiting for machine to come up
	I0311 20:10:31.672605   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:31.673128   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find current IP address of domain addons-118179 in network mk-addons-118179
	I0311 20:10:31.673157   18976 main.go:141] libmachine: (addons-118179) DBG | I0311 20:10:31.673081   18998 retry.go:31] will retry after 5.291420127s: waiting for machine to come up
	I0311 20:10:36.968881   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:36.969284   18976 main.go:141] libmachine: (addons-118179) Found IP for machine: 192.168.39.50
	I0311 20:10:36.969298   18976 main.go:141] libmachine: (addons-118179) Reserving static IP address...
	I0311 20:10:36.969307   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has current primary IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:36.969650   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find host DHCP lease matching {name: "addons-118179", mac: "52:54:00:ed:0e:83", ip: "192.168.39.50"} in network mk-addons-118179
	I0311 20:10:37.036713   18976 main.go:141] libmachine: (addons-118179) DBG | Getting to WaitForSSH function...
	I0311 20:10:37.036754   18976 main.go:141] libmachine: (addons-118179) Reserved static IP address: 192.168.39.50
	I0311 20:10:37.036776   18976 main.go:141] libmachine: (addons-118179) Waiting for SSH to be available...
	I0311 20:10:37.038651   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:37.038896   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179
	I0311 20:10:37.038925   18976 main.go:141] libmachine: (addons-118179) DBG | unable to find defined IP address of network mk-addons-118179 interface with MAC address 52:54:00:ed:0e:83
	I0311 20:10:37.039020   18976 main.go:141] libmachine: (addons-118179) DBG | Using SSH client type: external
	I0311 20:10:37.039043   18976 main.go:141] libmachine: (addons-118179) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa (-rw-------)
	I0311 20:10:37.039086   18976 main.go:141] libmachine: (addons-118179) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 20:10:37.039103   18976 main.go:141] libmachine: (addons-118179) DBG | About to run SSH command:
	I0311 20:10:37.039117   18976 main.go:141] libmachine: (addons-118179) DBG | exit 0
	I0311 20:10:37.049269   18976 main.go:141] libmachine: (addons-118179) DBG | SSH cmd err, output: exit status 255: 
	I0311 20:10:37.049288   18976 main.go:141] libmachine: (addons-118179) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0311 20:10:37.049295   18976 main.go:141] libmachine: (addons-118179) DBG | command : exit 0
	I0311 20:10:37.049300   18976 main.go:141] libmachine: (addons-118179) DBG | err     : exit status 255
	I0311 20:10:37.049306   18976 main.go:141] libmachine: (addons-118179) DBG | output  : 
	I0311 20:10:40.049797   18976 main.go:141] libmachine: (addons-118179) DBG | Getting to WaitForSSH function...
	I0311 20:10:40.051941   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.052274   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:40.052285   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.052550   18976 main.go:141] libmachine: (addons-118179) DBG | Using SSH client type: external
	I0311 20:10:40.052596   18976 main.go:141] libmachine: (addons-118179) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa (-rw-------)
	I0311 20:10:40.052627   18976 main.go:141] libmachine: (addons-118179) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 20:10:40.052647   18976 main.go:141] libmachine: (addons-118179) DBG | About to run SSH command:
	I0311 20:10:40.052665   18976 main.go:141] libmachine: (addons-118179) DBG | exit 0
	I0311 20:10:40.181037   18976 main.go:141] libmachine: (addons-118179) DBG | SSH cmd err, output: <nil>: 
	I0311 20:10:40.181307   18976 main.go:141] libmachine: (addons-118179) KVM machine creation complete!
	I0311 20:10:40.181611   18976 main.go:141] libmachine: (addons-118179) Calling .GetConfigRaw
	I0311 20:10:40.182121   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:10:40.182298   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:10:40.182459   18976 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0311 20:10:40.182475   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:10:40.183545   18976 main.go:141] libmachine: Detecting operating system of created instance...
	I0311 20:10:40.183569   18976 main.go:141] libmachine: Waiting for SSH to be available...
	I0311 20:10:40.183578   18976 main.go:141] libmachine: Getting to WaitForSSH function...
	I0311 20:10:40.183586   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:10:40.185540   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.185860   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:40.185886   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.186014   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:10:40.186159   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:40.186287   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:40.186411   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:10:40.186597   18976 main.go:141] libmachine: Using SSH client type: native
	I0311 20:10:40.186827   18976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0311 20:10:40.186842   18976 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0311 20:10:40.295881   18976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:10:40.295902   18976 main.go:141] libmachine: Detecting the provisioner...
	I0311 20:10:40.295912   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:10:40.298538   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.298925   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:40.298951   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.299113   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:10:40.299308   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:40.299456   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:40.299588   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:10:40.299756   18976 main.go:141] libmachine: Using SSH client type: native
	I0311 20:10:40.299908   18976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0311 20:10:40.299920   18976 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0311 20:10:40.413751   18976 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0311 20:10:40.413827   18976 main.go:141] libmachine: found compatible host: buildroot
	I0311 20:10:40.413837   18976 main.go:141] libmachine: Provisioning with buildroot...
	I0311 20:10:40.413843   18976 main.go:141] libmachine: (addons-118179) Calling .GetMachineName
	I0311 20:10:40.414061   18976 buildroot.go:166] provisioning hostname "addons-118179"
	I0311 20:10:40.414081   18976 main.go:141] libmachine: (addons-118179) Calling .GetMachineName
	I0311 20:10:40.414258   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:10:40.416408   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.416714   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:40.416782   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.416878   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:10:40.417047   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:40.417209   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:40.417357   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:10:40.417524   18976 main.go:141] libmachine: Using SSH client type: native
	I0311 20:10:40.417731   18976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0311 20:10:40.417744   18976 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-118179 && echo "addons-118179" | sudo tee /etc/hostname
	I0311 20:10:40.543993   18976 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-118179
	
	I0311 20:10:40.544014   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:10:40.546570   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.546853   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:40.546877   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.546989   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:10:40.547178   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:40.547384   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:40.547515   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:10:40.547667   18976 main.go:141] libmachine: Using SSH client type: native
	I0311 20:10:40.547821   18976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0311 20:10:40.547836   18976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-118179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-118179/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-118179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 20:10:40.672881   18976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:10:40.672910   18976 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 20:10:40.672930   18976 buildroot.go:174] setting up certificates
	I0311 20:10:40.672941   18976 provision.go:84] configureAuth start
	I0311 20:10:40.672952   18976 main.go:141] libmachine: (addons-118179) Calling .GetMachineName
	I0311 20:10:40.673208   18976 main.go:141] libmachine: (addons-118179) Calling .GetIP
	I0311 20:10:40.675464   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.675802   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:40.675831   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.675952   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:10:40.678043   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.678377   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:40.678398   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.678554   18976 provision.go:143] copyHostCerts
	I0311 20:10:40.678619   18976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 20:10:40.678756   18976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 20:10:40.678873   18976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 20:10:40.678966   18976 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.addons-118179 san=[127.0.0.1 192.168.39.50 addons-118179 localhost minikube]
	I0311 20:10:40.812364   18976 provision.go:177] copyRemoteCerts
	I0311 20:10:40.812426   18976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 20:10:40.812446   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:10:40.815053   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.815379   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:40.815400   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.815580   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:10:40.815760   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:40.815911   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:10:40.816025   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:10:40.903921   18976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 20:10:40.929503   18976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0311 20:10:40.954432   18976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 20:10:40.982432   18976 provision.go:87] duration metric: took 309.478234ms to configureAuth
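(Editor's note: configureAuth signs a server certificate against the local CA with the SAN list shown above — 127.0.0.1, the guest IP, the machine name, localhost, minikube — and copies it to /etc/docker on the guest. A rough, self-contained sketch of generating a SAN-bearing, CA-signed server certificate in Go; it creates a throwaway CA instead of reading the ca.pem/ca-key.pem files the log references, and is not minikube's code.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key/cert standing in for ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the same kind of SAN list the log shows.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-118179"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-118179", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.50")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write server.pem; server-key.pem would be handled the same way.
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}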
	I0311 20:10:40.982460   18976 buildroot.go:189] setting minikube options for container-runtime
	I0311 20:10:40.982633   18976 config.go:182] Loaded profile config "addons-118179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:10:40.982700   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:10:40.985330   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.985663   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:40.985683   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:40.985888   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:10:40.986079   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:40.986225   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:40.986351   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:10:40.986500   18976 main.go:141] libmachine: Using SSH client type: native
	I0311 20:10:40.986706   18976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0311 20:10:40.986721   18976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 20:10:41.554439   18976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 20:10:41.554464   18976 main.go:141] libmachine: Checking connection to Docker...
	I0311 20:10:41.554475   18976 main.go:141] libmachine: (addons-118179) Calling .GetURL
	I0311 20:10:41.555702   18976 main.go:141] libmachine: (addons-118179) DBG | Using libvirt version 6000000
	I0311 20:10:41.557898   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.558254   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:41.558279   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.558428   18976 main.go:141] libmachine: Docker is up and running!
	I0311 20:10:41.558439   18976 main.go:141] libmachine: Reticulating splines...
	I0311 20:10:41.558445   18976 client.go:171] duration metric: took 28.851785804s to LocalClient.Create
	I0311 20:10:41.558482   18976 start.go:167] duration metric: took 28.851846062s to libmachine.API.Create "addons-118179"
	I0311 20:10:41.558495   18976 start.go:293] postStartSetup for "addons-118179" (driver="kvm2")
	I0311 20:10:41.558507   18976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 20:10:41.558527   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:10:41.558777   18976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 20:10:41.558802   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:10:41.560876   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.561209   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:41.561234   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.561332   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:10:41.561477   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:41.561649   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:10:41.561790   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:10:41.648376   18976 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 20:10:41.653072   18976 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 20:10:41.653098   18976 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 20:10:41.653175   18976 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 20:10:41.653209   18976 start.go:296] duration metric: took 94.706914ms for postStartSetup
	I0311 20:10:41.653247   18976 main.go:141] libmachine: (addons-118179) Calling .GetConfigRaw
	I0311 20:10:41.653803   18976 main.go:141] libmachine: (addons-118179) Calling .GetIP
	I0311 20:10:41.656206   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.656524   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:41.656550   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.656806   18976 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/config.json ...
	I0311 20:10:41.656956   18976 start.go:128] duration metric: took 28.967631883s to createHost
	I0311 20:10:41.656983   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:10:41.658783   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.659044   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:41.659063   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.659189   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:10:41.659361   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:41.659497   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:41.659638   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:10:41.659811   18976 main.go:141] libmachine: Using SSH client type: native
	I0311 20:10:41.659952   18976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0311 20:10:41.659964   18976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 20:10:41.773588   18976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710187841.749936541
	
	I0311 20:10:41.773617   18976 fix.go:216] guest clock: 1710187841.749936541
	I0311 20:10:41.773627   18976 fix.go:229] Guest: 2024-03-11 20:10:41.749936541 +0000 UTC Remote: 2024-03-11 20:10:41.656967218 +0000 UTC m=+29.080257519 (delta=92.969323ms)
	I0311 20:10:41.773653   18976 fix.go:200] guest clock delta is within tolerance: 92.969323ms
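(Editor's note: the fix step compares the guest's `date +%s.%N` output with the host-side timestamp captured just before the command ran; as long as the delta stays inside the driver's tolerance the clock is left alone. A small sketch of that comparison using the exact values from the log; the one-second tolerance is an assumption for illustration only.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the "seconds.nanoseconds" string printed by
// `date +%s.%N` on the guest into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1710187841.749936541") // value from the log
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 3, 11, 20, 10, 41, 656967218, time.UTC) // host-side reference
	delta := guest.Sub(remote)                                        // 92.969323ms here
	tolerance := time.Second                                          // assumed tolerance, for illustration
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}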
	I0311 20:10:41.773661   18976 start.go:83] releasing machines lock for "addons-118179", held for 29.084414269s
	I0311 20:10:41.773688   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:10:41.773944   18976 main.go:141] libmachine: (addons-118179) Calling .GetIP
	I0311 20:10:41.776448   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.776833   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:41.776860   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.777038   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:10:41.777488   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:10:41.777641   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:10:41.777724   18976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 20:10:41.777767   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:10:41.777857   18976 ssh_runner.go:195] Run: cat /version.json
	I0311 20:10:41.777873   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:10:41.780129   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.780148   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.780473   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:41.780498   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.780523   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:41.780538   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:41.780636   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:10:41.780799   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:10:41.780818   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:41.780982   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:10:41.780997   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:10:41.781113   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:10:41.781111   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:10:41.781227   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:10:41.869795   18976 ssh_runner.go:195] Run: systemctl --version
	I0311 20:10:41.909921   18976 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 20:10:42.074255   18976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 20:10:42.082102   18976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 20:10:42.082163   18976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 20:10:42.100059   18976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
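(Editor's note: because CRI-O's own bridge CNI should own pod networking, any pre-existing bridge or podman CNI definitions under /etc/cni/net.d are renamed out of the way with a `.mk_disabled` suffix, which is what the `find ... -exec mv` above does. The same idea expressed as a small Go sketch; the directory and suffix come from the log, the helper itself is illustrative and not minikube's code.)

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI config files so the
// runtime's own bridge config is the only one left active.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", disabled)
}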
	I0311 20:10:42.100076   18976 start.go:494] detecting cgroup driver to use...
	I0311 20:10:42.100142   18976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 20:10:42.119055   18976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 20:10:42.133826   18976 docker.go:217] disabling cri-docker service (if available) ...
	I0311 20:10:42.133872   18976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 20:10:42.148469   18976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 20:10:42.162814   18976 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 20:10:42.287843   18976 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 20:10:42.445466   18976 docker.go:233] disabling docker service ...
	I0311 20:10:42.445519   18976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 20:10:42.461315   18976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 20:10:42.474684   18976 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 20:10:42.589330   18976 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 20:10:42.711249   18976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 20:10:42.727520   18976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 20:10:42.747912   18976 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 20:10:42.747987   18976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:10:42.758813   18976 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 20:10:42.758873   18976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:10:42.769278   18976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:10:42.779742   18976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:10:42.790185   18976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 20:10:42.800805   18976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 20:10:42.810105   18976 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 20:10:42.810148   18976 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 20:10:42.823037   18976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 20:10:42.833525   18976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:10:42.961221   18976 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 20:10:43.103206   18976 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 20:10:43.103296   18976 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 20:10:43.108558   18976 start.go:562] Will wait 60s for crictl version
	I0311 20:10:43.108614   18976 ssh_runner.go:195] Run: which crictl
	I0311 20:10:43.112580   18976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 20:10:43.147622   18976 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 20:10:43.147730   18976 ssh_runner.go:195] Run: crio --version
	I0311 20:10:43.176210   18976 ssh_runner.go:195] Run: crio --version
	I0311 20:10:43.207829   18976 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 20:10:43.209181   18976 main.go:141] libmachine: (addons-118179) Calling .GetIP
	I0311 20:10:43.211692   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:43.211967   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:10:43.211989   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:10:43.212166   18976 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 20:10:43.216523   18976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
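(Editor's note: the one-liner above is the usual idempotent pattern for pinning a name in /etc/hosts: drop any line already ending in the name, append a fresh "IP<TAB>name" entry, and copy the temporary file back over /etc/hosts. Sketched in Go for clarity; the file path and helper name are illustrative, not minikube's implementation.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one
// line maps name to ip, preserving every unrelated line.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasSuffix(trimmed, "\t"+name) || strings.HasSuffix(trimmed, " "+name) {
			continue // drop any stale mapping for this name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Example against a scratch file rather than the real /etc/hosts.
	if err := ensureHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}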
	I0311 20:10:43.230323   18976 kubeadm.go:877] updating cluster {Name:addons-118179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:addons-118179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 20:10:43.230453   18976 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:10:43.230494   18976 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:10:43.269880   18976 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 20:10:43.269939   18976 ssh_runner.go:195] Run: which lz4
	I0311 20:10:43.274467   18976 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 20:10:43.279073   18976 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 20:10:43.279098   18976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 20:10:44.918149   18976 crio.go:444] duration metric: took 1.643702538s to copy over tarball
	I0311 20:10:44.918208   18976 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 20:10:47.812350   18976 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.894118658s)
	I0311 20:10:47.812377   18976 crio.go:451] duration metric: took 2.894203479s to extract the tarball
	I0311 20:10:47.812385   18976 ssh_runner.go:146] rm: /preloaded.tar.lz4
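(Editor's note: since no images were preloaded yet, the ~458 MB preload tarball is copied into the guest and unpacked into /var with tar's lz4 filter; the earlier `stat` failure is just the existence check deciding the copy is needed. A compressed view of that stat -> scp -> tar -> rm sequence as a Go exec sketch; runSSH and scpToGuest are assumed stand-ins for minikube's ssh_runner, here they simply shell out locally so the sketch stays self-contained.)

package main

import (
	"fmt"
	"os/exec"
)

// ensurePreloadExtracted copies the preload tarball to the guest only when
// it is missing there, then extracts it under /var, mirroring the log's
// stat -> scp -> tar sequence.
func ensurePreloadExtracted(localTarball string) error {
	// Existence check: a non-zero exit means the tarball is not on the guest yet.
	if err := runSSH(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
		if err := scpToGuest(localTarball, "/preloaded.tar.lz4"); err != nil {
			return err
		}
	}
	// Unpack with lz4 decompression, preserving security xattrs, then clean up.
	if err := runSSH("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return err
	}
	return runSSH("sudo rm -f /preloaded.tar.lz4")
}

// Hypothetical helpers: run a command "on the guest" (locally here) and copy a file over.
func runSSH(cmd string) error          { return exec.Command("sh", "-c", cmd).Run() }
func scpToGuest(src, dst string) error { return exec.Command("cp", src, dst).Run() }

func main() {
	if err := ensurePreloadExtracted("preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"); err != nil {
		fmt.Println("error:", err)
	}
}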
	I0311 20:10:47.855645   18976 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:10:47.902979   18976 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 20:10:47.903006   18976 cache_images.go:84] Images are preloaded, skipping loading
	I0311 20:10:47.903016   18976 kubeadm.go:928] updating node { 192.168.39.50 8443 v1.28.4 crio true true} ...
	I0311 20:10:47.903133   18976 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-118179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-118179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 20:10:47.903229   18976 ssh_runner.go:195] Run: crio config
	I0311 20:10:47.953471   18976 cni.go:84] Creating CNI manager for ""
	I0311 20:10:47.953493   18976 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 20:10:47.953504   18976 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 20:10:47.953523   18976 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-118179 NodeName:addons-118179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 20:10:47.953655   18976 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-118179"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
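(Editor's note: the multi-document YAML above is rendered from the kubeadm options struct logged a few lines earlier and written to /var/tmp/minikube/kubeadm.yaml.new before being copied into place. A minimal sketch of how such a document can be rendered from a struct with text/template; the trimmed-down initOpts type and template are illustrative, not minikube's actual templates.)

package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the kubeadm options shown in the log; the
// real struct has many more fields.
type initOpts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

// initTmpl renders just the InitConfiguration document; the full config in
// the log is several such documents joined with "---".
var initTmpl = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`))

func main() {
	opts := initOpts{
		AdvertiseAddress: "192.168.39.50",
		BindPort:         8443,
		NodeName:         "addons-118179",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	// Writes the rendered document to stdout; minikube writes it to
	// /var/tmp/minikube/kubeadm.yaml.new before copying it into place.
	_ = initTmpl.Execute(os.Stdout, opts)
}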
	I0311 20:10:47.953709   18976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 20:10:47.964307   18976 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 20:10:47.964356   18976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 20:10:47.974383   18976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0311 20:10:47.992626   18976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 20:10:48.010806   18976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0311 20:10:48.029663   18976 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0311 20:10:48.034446   18976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 20:10:48.049029   18976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:10:48.176262   18976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:10:48.195753   18976 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179 for IP: 192.168.39.50
	I0311 20:10:48.195782   18976 certs.go:194] generating shared ca certs ...
	I0311 20:10:48.195802   18976 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:10:48.195960   18976 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 20:10:48.414130   18976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt ...
	I0311 20:10:48.414155   18976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt: {Name:mk83becedcfc7e173eb7ecad6b1a880cc6f9b7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:10:48.414296   18976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key ...
	I0311 20:10:48.414307   18976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key: {Name:mk8d6233a436e99a85fe2d02311d2f7911d11dff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:10:48.414384   18976 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 20:10:48.571300   18976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt ...
	I0311 20:10:48.571328   18976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt: {Name:mk52078c2e888dc61bbb89237692d3fc0c343651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:10:48.571474   18976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key ...
	I0311 20:10:48.571485   18976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key: {Name:mkddf8a3c1aea0027cf454604f177f2476ade3ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:10:48.571556   18976 certs.go:256] generating profile certs ...
	I0311 20:10:48.571610   18976 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.key
	I0311 20:10:48.571628   18976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt with IP's: []
	I0311 20:10:48.705820   18976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt ...
	I0311 20:10:48.705858   18976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: {Name:mk7030b5d516e8ea3eebfbb44b761cca95cc53ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:10:48.706025   18976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.key ...
	I0311 20:10:48.706037   18976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.key: {Name:mk572569cfff287d40d9f45bb88857efd6dbbac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:10:48.706124   18976 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/apiserver.key.0a510725
	I0311 20:10:48.706143   18976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/apiserver.crt.0a510725 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.50]
	I0311 20:10:49.009649   18976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/apiserver.crt.0a510725 ...
	I0311 20:10:49.009677   18976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/apiserver.crt.0a510725: {Name:mk2ccc18c6cba1cefd07281b61eb84b03b7bb74d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:10:49.009827   18976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/apiserver.key.0a510725 ...
	I0311 20:10:49.009840   18976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/apiserver.key.0a510725: {Name:mk7e40674af8ea4b9d99680d8e5195a25569fb7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:10:49.009931   18976 certs.go:381] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/apiserver.crt.0a510725 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/apiserver.crt
	I0311 20:10:49.010057   18976 certs.go:385] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/apiserver.key.0a510725 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/apiserver.key
	I0311 20:10:49.010131   18976 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/proxy-client.key
	I0311 20:10:49.010149   18976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/proxy-client.crt with IP's: []
	I0311 20:10:49.248572   18976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/proxy-client.crt ...
	I0311 20:10:49.248599   18976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/proxy-client.crt: {Name:mk567dc2440faece38e6a15ad7cd9dd732522336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:10:49.248760   18976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/proxy-client.key ...
	I0311 20:10:49.248771   18976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/proxy-client.key: {Name:mkba40b51a6e6a842437b128911ba6c13f86d50c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:10:49.248948   18976 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 20:10:49.248980   18976 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 20:10:49.249002   18976 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 20:10:49.249025   18976 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 20:10:49.249628   18976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 20:10:49.277742   18976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 20:10:49.306797   18976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 20:10:49.332886   18976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 20:10:49.358976   18976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0311 20:10:49.385387   18976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 20:10:49.410682   18976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 20:10:49.436918   18976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 20:10:49.462086   18976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 20:10:49.487039   18976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 20:10:49.504376   18976 ssh_runner.go:195] Run: openssl version
	I0311 20:10:49.510194   18976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 20:10:49.521080   18976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:10:49.525553   18976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:10:49.525599   18976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:10:49.531290   18976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
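(Editor's note: the b5213941.0 name is not arbitrary. OpenSSL looks certificates up in /etc/ssl/certs by the hash of their subject, so the CA is linked as "<subject-hash>.0", where the hash is what `openssl x509 -hash -noout -in minikubeCA.pem` prints — b5213941 for minikube's CA. A sketch of producing that link; it assumes openssl is on PATH, and the helper name is illustrative.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash installs certPath under certsDir as <subject-hash>.0 so
// OpenSSL's hashed-directory lookup can find it.
func linkCertByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// Replace any stale link, then point the hashed name at the cert.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("linked as", link)
}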
	I0311 20:10:49.541964   18976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 20:10:49.546222   18976 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 20:10:49.546268   18976 kubeadm.go:391] StartCluster: {Name:addons-118179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 C
lusterName:addons-118179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:10:49.546333   18976 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 20:10:49.546387   18976 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 20:10:49.585657   18976 cri.go:89] found id: ""
	I0311 20:10:49.585714   18976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0311 20:10:49.596046   18976 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 20:10:49.609407   18976 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 20:10:49.630157   18976 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 20:10:49.630173   18976 kubeadm.go:156] found existing configuration files:
	
	I0311 20:10:49.630212   18976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 20:10:49.642810   18976 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 20:10:49.642897   18976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 20:10:49.657463   18976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 20:10:49.674412   18976 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 20:10:49.674468   18976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 20:10:49.686870   18976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 20:10:49.697208   18976 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 20:10:49.697264   18976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 20:10:49.707841   18976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 20:10:49.717217   18976 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 20:10:49.717260   18976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
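
The four grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and is otherwise removed so kubeadm can regenerate it. A minimal local sketch of that loop, assuming plain file I/O in place of minikube's ssh_runner (the endpoint and file list are copied from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cleanupStaleKubeconfigs removes any kubeconfig that does not reference the
    // expected control-plane endpoint, mirroring the grep/rm sequence in the log.
    func cleanupStaleKubeconfigs(endpoint string, files []string) {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or wrong endpoint: delete it so kubeadm rewrites it.
    			_ = os.Remove(f)
    			fmt.Printf("removed stale config: %s\n", f)
    		}
    	}
    }

    func main() {
    	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
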
	I0311 20:10:49.727834   18976 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 20:10:49.780001   18976 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 20:10:49.780070   18976 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 20:10:49.902184   18976 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 20:10:49.902276   18976 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 20:10:49.902359   18976 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0311 20:10:50.124441   18976 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 20:10:50.127306   18976 out.go:204]   - Generating certificates and keys ...
	I0311 20:10:50.127408   18976 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 20:10:50.127499   18976 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 20:10:50.278758   18976 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0311 20:10:50.623417   18976 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0311 20:10:50.808164   18976 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0311 20:10:51.007804   18976 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0311 20:10:51.244682   18976 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0311 20:10:51.244933   18976 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-118179 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0311 20:10:51.305665   18976 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0311 20:10:51.305831   18976 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-118179 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0311 20:10:51.596117   18976 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0311 20:10:51.878877   18976 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0311 20:10:52.100474   18976 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0311 20:10:52.100610   18976 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 20:10:52.195425   18976 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 20:10:52.305380   18976 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 20:10:52.550129   18976 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 20:10:52.829744   18976 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 20:10:52.830379   18976 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 20:10:52.832789   18976 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 20:10:52.834690   18976 out.go:204]   - Booting up control plane ...
	I0311 20:10:52.834805   18976 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 20:10:52.837022   18976 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 20:10:52.837921   18976 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 20:10:52.853697   18976 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 20:10:52.853828   18976 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 20:10:52.854328   18976 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 20:10:52.995074   18976 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 20:10:58.994530   18976 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002383 seconds
	I0311 20:10:58.994679   18976 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 20:10:59.012410   18976 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 20:10:59.548643   18976 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 20:10:59.548904   18976 kubeadm.go:309] [mark-control-plane] Marking the node addons-118179 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 20:11:00.065808   18976 kubeadm.go:309] [bootstrap-token] Using token: f3qo1w.uahuh6kium0gss0a
	I0311 20:11:00.067655   18976 out.go:204]   - Configuring RBAC rules ...
	I0311 20:11:00.067791   18976 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 20:11:00.078922   18976 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 20:11:00.096254   18976 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 20:11:00.099990   18976 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 20:11:00.105900   18976 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 20:11:00.111437   18976 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 20:11:00.129165   18976 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 20:11:00.340695   18976 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 20:11:00.487986   18976 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 20:11:00.488713   18976 kubeadm.go:309] 
	I0311 20:11:00.488824   18976 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 20:11:00.488858   18976 kubeadm.go:309] 
	I0311 20:11:00.488985   18976 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 20:11:00.488999   18976 kubeadm.go:309] 
	I0311 20:11:00.489037   18976 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 20:11:00.489107   18976 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 20:11:00.489178   18976 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 20:11:00.489186   18976 kubeadm.go:309] 
	I0311 20:11:00.489241   18976 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 20:11:00.489248   18976 kubeadm.go:309] 
	I0311 20:11:00.489311   18976 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 20:11:00.489321   18976 kubeadm.go:309] 
	I0311 20:11:00.489386   18976 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 20:11:00.489504   18976 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 20:11:00.489622   18976 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 20:11:00.489638   18976 kubeadm.go:309] 
	I0311 20:11:00.489752   18976 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 20:11:00.489868   18976 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 20:11:00.489879   18976 kubeadm.go:309] 
	I0311 20:11:00.489985   18976 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token f3qo1w.uahuh6kium0gss0a \
	I0311 20:11:00.490108   18976 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 20:11:00.490140   18976 kubeadm.go:309] 	--control-plane 
	I0311 20:11:00.490150   18976 kubeadm.go:309] 
	I0311 20:11:00.490246   18976 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 20:11:00.490255   18976 kubeadm.go:309] 
	I0311 20:11:00.490375   18976 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token f3qo1w.uahuh6kium0gss0a \
	I0311 20:11:00.490467   18976 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 20:11:00.492902   18976 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
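
Everything from "[init] Using Kubernetes version" through the join commands above is the output of a single kubeadm invocation: minikube renders /var/tmp/minikube/kubeadm.yaml, prepends its bundled binaries to PATH, and runs kubeadm init with a fixed list of ignored preflight checks. A hedged os/exec sketch of that invocation (paths come from the log; the ignore list is abbreviated and error handling is simplified):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	// Run kubeadm init with minikube's generated config, streaming the
    	// "[init]", "[certs]", "[kubeconfig]" progress lines seen above.
    	cmd := exec.Command("kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml",
    		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem")
    	cmd.Env = append(os.Environ(),
    		"PATH=/var/lib/minikube/binaries/v1.28.4:"+os.Getenv("PATH"))
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }
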
	I0311 20:11:00.493083   18976 cni.go:84] Creating CNI manager for ""
	I0311 20:11:00.493096   18976 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 20:11:00.495748   18976 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 20:11:00.497101   18976 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 20:11:00.513282   18976 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
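
The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced two lines earlier. Its exact contents are not shown in the log, so the sketch below writes an illustrative standard bridge + portmap chain; the cniVersion, subnet, and other field values are assumptions, not the real generated file:

    package main

    import "os"

    // Illustrative bridge CNI chain; the real 1-k8s.conflist generated by
    // minikube is not shown in the log, so these values are assumptions.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
    	// Needs root, like the "sudo mkdir -p /etc/cni/net.d" step above.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }
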
	I0311 20:11:00.544068   18976 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 20:11:00.544160   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:00.544231   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-118179 minikube.k8s.io/updated_at=2024_03_11T20_11_00_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=addons-118179 minikube.k8s.io/primary=true
	I0311 20:11:00.750470   18976 ops.go:34] apiserver oom_adj: -16
	I0311 20:11:00.750508   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:01.251352   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:01.750978   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:02.251568   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:02.751372   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:03.250947   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:03.750741   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:04.250900   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:04.750725   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:05.250568   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:05.751406   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:06.251531   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:06.751234   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:07.251461   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:07.751190   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:08.251144   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:08.750697   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:09.250840   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:09.750634   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:10.251432   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:10.751224   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:11.250609   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:11.750660   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:12.250826   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:12.751121   18976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:11:12.844065   18976 kubeadm.go:1106] duration metric: took 12.299960731s to wait for elevateKubeSystemPrivileges
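
The run of identical "kubectl get sa default" lines above is a poll loop: minikube retries roughly every 500 ms until the default service account exists, and the 12.3 s "elevateKubeSystemPrivileges" duration is how long that wait took. A minimal sketch of such a retry loop, assuming a local kubectl instead of minikube's ssh_runner (interval and timeout are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
    // timeout expires, mirroring the repeated runs in the log above.
    func waitForDefaultSA(kubeconfig string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil // the default service account now exists
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 500*time.Millisecond, 2*time.Minute); err != nil {
    		panic(err)
    	}
    }
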
	W0311 20:11:12.844110   18976 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 20:11:12.844120   18976 kubeadm.go:393] duration metric: took 23.297859372s to StartCluster
	I0311 20:11:12.844140   18976 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:11:12.844277   18976 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:11:12.844806   18976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:11:12.845233   18976 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0311 20:11:12.845249   18976 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:11:12.847962   18976 out.go:177] * Verifying Kubernetes components...
	I0311 20:11:12.845320   18976 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
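
That toEnable map drives the rest of this section: every key set to true produces its own "Setting addon ... in addons-118179" / "Checking if addons-118179 exists" pair below. A small sketch of collecting the enabled addon names from such a map (the map literal here is trimmed to a few representative entries):

    package main

    import (
    	"fmt"
    	"sort"
    )

    func main() {
    	// Trimmed version of the toEnable map from the log; true means the
    	// addon will be installed during this start.
    	toEnable := map[string]bool{
    		"ingress": true, "ingress-dns": true, "metrics-server": true,
    		"registry": true, "ambassador": false, "dashboard": false,
    	}
    	var enabled []string
    	for name, on := range toEnable {
    		if on {
    			enabled = append(enabled, name)
    		}
    	}
    	sort.Strings(enabled) // deterministic ordering for logging
    	fmt.Println("enabling addons:", enabled)
    }
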
	I0311 20:11:12.845484   18976 config.go:182] Loaded profile config "addons-118179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:11:12.849390   18976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:11:12.849391   18976 addons.go:69] Setting default-storageclass=true in profile "addons-118179"
	I0311 20:11:12.849402   18976 addons.go:69] Setting gcp-auth=true in profile "addons-118179"
	I0311 20:11:12.849414   18976 addons.go:69] Setting metrics-server=true in profile "addons-118179"
	I0311 20:11:12.849424   18976 mustload.go:65] Loading cluster: addons-118179
	I0311 20:11:12.849417   18976 addons.go:69] Setting cloud-spanner=true in profile "addons-118179"
	I0311 20:11:12.849395   18976 addons.go:69] Setting yakd=true in profile "addons-118179"
	I0311 20:11:12.849395   18976 addons.go:69] Setting ingress-dns=true in profile "addons-118179"
	I0311 20:11:12.849454   18976 addons.go:234] Setting addon cloud-spanner=true in "addons-118179"
	I0311 20:11:12.849440   18976 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-118179"
	I0311 20:11:12.849462   18976 addons.go:234] Setting addon yakd=true in "addons-118179"
	I0311 20:11:12.849462   18976 addons.go:234] Setting addon ingress-dns=true in "addons-118179"
	I0311 20:11:12.849460   18976 addons.go:69] Setting storage-provisioner=true in profile "addons-118179"
	I0311 20:11:12.849489   18976 addons.go:234] Setting addon storage-provisioner=true in "addons-118179"
	I0311 20:11:12.849496   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.849499   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.849506   18976 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-118179"
	I0311 20:11:12.849516   18976 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-118179"
	I0311 20:11:12.849519   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.849529   18976 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-118179"
	I0311 20:11:12.849539   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.849545   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.849598   18976 config.go:182] Loaded profile config "addons-118179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:11:12.849496   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.849747   18976 addons.go:234] Setting addon metrics-server=true in "addons-118179"
	I0311 20:11:12.849781   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.849780   18976 addons.go:69] Setting registry=true in profile "addons-118179"
	I0311 20:11:12.849816   18976 addons.go:234] Setting addon registry=true in "addons-118179"
	I0311 20:11:12.849850   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.849976   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.849993   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.850012   18976 addons.go:69] Setting helm-tiller=true in profile "addons-118179"
	I0311 20:11:12.850027   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.850039   18976 addons.go:234] Setting addon helm-tiller=true in "addons-118179"
	I0311 20:11:12.850063   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.850137   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.850171   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.850185   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.850193   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.850361   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.850421   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.849405   18976 addons.go:69] Setting inspektor-gadget=true in profile "addons-118179"
	I0311 20:11:12.850536   18976 addons.go:234] Setting addon inspektor-gadget=true in "addons-118179"
	I0311 20:11:12.850566   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.850584   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.850614   18976 addons.go:69] Setting ingress=true in profile "addons-118179"
	I0311 20:11:12.850654   18976 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-118179"
	I0311 20:11:12.850681   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.850697   18976 addons.go:69] Setting volumesnapshots=true in profile "addons-118179"
	I0311 20:11:12.850722   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.850725   18976 addons.go:234] Setting addon volumesnapshots=true in "addons-118179"
	I0311 20:11:12.850758   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.850879   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.850425   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.850687   18976 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-118179"
	I0311 20:11:12.850919   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.850996   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.851077   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.851115   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.850017   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.851271   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.851299   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.850641   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.850599   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.851681   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.849440   18976 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-118179"
	I0311 20:11:12.852427   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.852453   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.850687   18976 addons.go:234] Setting addon ingress=true in "addons-118179"
	I0311 20:11:12.852641   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.852986   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.853012   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.850425   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.859037   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.871759   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37889
	I0311 20:11:12.872251   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39647
	I0311 20:11:12.872251   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44491
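
The interleaved "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:<port>" pairs come from libmachine's driver-plugin model: each driver runs as a separate process that serves the driver API over RPC on an ephemeral localhost port, and the parent then issues calls such as GetVersion, SetConfigRaw, and GetMachineName against it. A minimal net/rpc sketch of the server side, with a single illustrative GetVersion method (this is not the real docker-machine-driver-kvm2 API):

    package main

    import (
    	"fmt"
    	"net"
    	"net/rpc"
    )

    // Driver is a stand-in for a machine driver exposed over RPC; only an
    // illustrative GetVersion method is shown.
    type Driver struct{}

    // GetVersion reports the plugin API version, analogous to the
    // "Using API Version 1" lines in the log. args is unused.
    func (d *Driver) GetVersion(args int, reply *int) error {
    	*reply = 1
    	return nil
    }

    func main() {
    	if err := rpc.Register(&Driver{}); err != nil {
    		panic(err)
    	}
    	// Listen on an ephemeral localhost port, as the plugin servers above do.
    	l, err := net.Listen("tcp", "127.0.0.1:0")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Plugin server listening at address", l.Addr())
    	rpc.Accept(l) // serve driver calls until the parent disconnects
    }
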
	I0311 20:11:12.872724   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.872806   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.873223   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.873245   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.873306   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.873696   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.873758   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.873775   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.873855   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.873870   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.874232   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.874347   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.874378   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.874574   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.874786   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.874833   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.875207   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.875234   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.888949   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I0311 20:11:12.889102   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40759
	I0311 20:11:12.889271   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37429
	I0311 20:11:12.889537   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.889647   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.890276   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.890366   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.890368   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.890379   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.890384   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.890745   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.890900   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.890919   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.890986   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43553
	I0311 20:11:12.891202   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.891261   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.891331   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:12.891906   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.891939   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.892197   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.892209   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.892535   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.892542   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.893391   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.893424   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.893608   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.893952   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.893976   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.894195   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0311 20:11:12.894236   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.894254   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.894429   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0311 20:11:12.894545   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.894949   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.894977   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.895037   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.895396   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.895596   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:12.895871   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.895887   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.896828   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40601
	I0311 20:11:12.897944   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.898516   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.898546   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.900516   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.900521   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:12.902690   18976 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0311 20:11:12.904171   18976 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0311 20:11:12.904188   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0311 20:11:12.904205   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:12.901158   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.904264   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.904630   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.905170   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.905216   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.907666   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.908588   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:12.908607   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.908789   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:12.908999   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:12.909149   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:12.909265   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
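
Each "new ssh client" line corresponds to a connection that will push an addon manifest onto the node (the "scp memory --> /etc/kubernetes/addons/..." lines) and later run its apply step. A hedged sketch of opening such a connection with golang.org/x/crypto/ssh and streaming an in-memory manifest through sudo tee; the key path, address, and username are taken from the log, while the manifest bytes and remote filename are placeholders, and host-key checking is skipped only because this is a throwaway test VM:

    package main

    import (
    	"bytes"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test VM
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.50:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// Equivalent of "scp memory --> /etc/kubernetes/addons/...": stream the
    	// in-memory manifest to a remote path through sudo tee.
    	manifest := []byte("# addon manifest bytes held in memory\n")
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(manifest)
    	if err := sess.Run("sudo tee /etc/kubernetes/addons/example.yaml >/dev/null"); err != nil {
    		panic(err)
    	}
    }
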
	I0311 20:11:12.925297   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33257
	I0311 20:11:12.925804   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.926364   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.926382   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.926772   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.926985   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:12.930163   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33143
	I0311 20:11:12.930942   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.931257   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43617
	I0311 20:11:12.931970   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.932106   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.932113   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.932530   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.932564   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.932614   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.932663   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34201
	I0311 20:11:12.932826   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:12.932985   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.933449   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.933463   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.933524   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0311 20:11:12.933664   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.934165   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.934241   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.934460   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.934494   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.934642   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.934656   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.935036   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.935095   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:12.935141   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:12.935180   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:12.937346   18976 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0311 20:11:12.938867   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42993
	I0311 20:11:12.940200   18976 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0311 20:11:12.937917   18976 addons.go:234] Setting addon default-storageclass=true in "addons-118179"
	I0311 20:11:12.939380   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.939542   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:12.941491   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.943424   18976 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0311 20:11:12.941870   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41621
	I0311 20:11:12.941869   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.942425   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.946146   18976 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0311 20:11:12.945066   18976 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0311 20:11:12.945101   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.945113   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.945536   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42735
	I0311 20:11:12.945783   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.948856   18976 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0311 20:11:12.947917   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.950343   18976 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0311 20:11:12.948011   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.948454   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.948816   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39201
	I0311 20:11:12.948828   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0311 20:11:12.950562   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:12.953395   18976 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0311 20:11:12.951994   18976 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 20:11:12.952112   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.952498   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.952680   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.954201   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.954234   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:12.956995   18976 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0311 20:11:12.955459   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 20:11:12.955532   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.956041   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.956069   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.956677   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.956708   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0311 20:11:12.958332   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:12.958386   18976 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0311 20:11:12.958741   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.959202   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0311 20:11:12.959761   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.959838   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.959745   18976 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0311 20:11:12.961892   18976 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0311 20:11:12.961919   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0311 20:11:12.961936   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:12.959936   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0311 20:11:12.961990   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:12.960505   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.960505   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:12.960529   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:12.960531   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.960847   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.960875   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.961327   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44307
	I0311 20:11:12.962642   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.962652   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.962956   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.963150   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.963475   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.963499   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.963582   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:12.963808   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.963820   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.963878   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.963905   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.964178   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.964561   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:12.964654   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.964662   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.965406   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.965836   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.965859   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.966313   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:12.966484   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39467
	I0311 20:11:12.969638   18976 out.go:177]   - Using image docker.io/registry:2.8.3
	I0311 20:11:12.966846   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.967277   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:12.967626   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:12.967748   18976 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-118179"
	I0311 20:11:12.968479   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.969489   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:12.969879   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.970085   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:12.970491   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.970625   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:12.972792   18976 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0311 20:11:12.971497   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:12.971534   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:12.971556   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:12.971658   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:12.971680   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:12.971824   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.971849   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:12.971878   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:12.974476   18976 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0311 20:11:12.974502   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.974906   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.974924   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.974939   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.974958   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.975173   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:12.975896   18976 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0311 20:11:12.976093   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:12.976104   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:12.976116   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42015
	I0311 20:11:12.977462   18976 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0311 20:11:12.979532   18976 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 20:11:12.979552   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0311 20:11:12.979568   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:12.981108   18976 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0311 20:11:12.981123   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0311 20:11:12.981141   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:12.977559   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0311 20:11:12.981205   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:12.977594   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.977608   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39181
	I0311 20:11:12.977756   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:12.977805   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.977943   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:12.977887   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:12.978624   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.982834   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:12.982903   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.982929   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.983157   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.983175   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.983219   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:12.983233   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.983422   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.983436   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.983486   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:12.984176   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.984539   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:12.984642   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.985237   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:12.985457   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:12.985500   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:12.985767   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.987900   18976 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0311 20:11:12.985850   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:12.986189   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:12.986438   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:12.986561   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.987131   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:12.987159   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:12.989522   18976 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0311 20:11:12.989532   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0311 20:11:12.989546   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:12.989606   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.989647   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:12.989666   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.990447   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:12.990497   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:12.990559   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:12.990831   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33151
	I0311 20:11:12.991194   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:12.991250   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:12.991411   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:12.991934   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:12.991942   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:12.992027   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.994323   18976 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 20:11:12.992431   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.994284   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.995976   18976 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 20:11:12.994451   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:12.996005   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 20:11:12.996024   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:12.994806   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:12.994965   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:12.996107   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:12.996245   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:12.996367   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:12.996423   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:12.996544   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:12.997608   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:12.998765   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0311 20:11:12.999226   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:12.999374   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:12.999563   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:13.001372   18976 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0311 20:11:12.999827   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:12.999985   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:13.000002   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37037
	I0311 20:11:13.000202   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:13.002987   18976 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0311 20:11:13.003003   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0311 20:11:13.003020   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:13.003069   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:13.003121   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:13.004004   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:13.004108   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:13.004310   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:13.004465   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:13.004479   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:13.004526   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:13.004779   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:13.005034   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:13.005143   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0311 20:11:13.006382   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:13.006946   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:13.006966   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:13.006995   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:13.007216   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:13.007396   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:13.009253   18976 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0311 20:11:13.007767   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:13.007801   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:13.008369   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:13.008403   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:13.009369   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0311 20:11:13.011145   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:13.011163   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:13.013275   18976 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 20:11:13.011396   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:13.011508   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:13.012802   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:13.013326   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:13.015963   18976 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 20:11:13.014741   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:13.014844   18976 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 20:11:13.015165   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:13.017513   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:13.017559   18976 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 20:11:13.018900   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0311 20:11:13.018917   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:13.018937   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 20:11:13.018952   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:13.018962   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:13.018992   18976 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0311 20:11:13.019284   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:13.020900   18976 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 20:11:13.020912   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0311 20:11:13.020926   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:13.021458   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:13.021479   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:13.021974   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:13.022387   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:13.022420   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:13.022597   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:13.022649   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:13.022809   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:13.022926   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:13.022998   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:13.023017   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:13.023063   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:13.023167   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:13.023323   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:13.023472   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:13.023608   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:13.023875   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:13.024238   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:13.024263   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:13.024421   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:13.024563   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:13.024684   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:13.024798   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:13.043845   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45333
	I0311 20:11:13.044196   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:13.044703   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:13.044728   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:13.045035   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:13.045246   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:13.046697   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:13.048692   18976 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0311 20:11:13.050085   18976 out.go:177]   - Using image docker.io/busybox:stable
	I0311 20:11:13.051737   18976 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 20:11:13.051751   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0311 20:11:13.051768   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:13.054772   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:13.055204   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:13.055226   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:13.055395   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:13.055561   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:13.055682   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:13.055798   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:13.413254   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 20:11:13.455484   18976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:11:13.455499   18976 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0311 20:11:13.456025   18976 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0311 20:11:13.456058   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0311 20:11:13.477812   18976 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0311 20:11:13.477834   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0311 20:11:13.504943   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 20:11:13.521427   18976 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0311 20:11:13.521448   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0311 20:11:13.553976   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 20:11:13.568542   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0311 20:11:13.569244   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 20:11:13.587319   18976 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0311 20:11:13.587343   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0311 20:11:13.598787   18976 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0311 20:11:13.598803   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0311 20:11:13.599848   18976 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0311 20:11:13.599862   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0311 20:11:13.629901   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 20:11:13.635978   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 20:11:13.645132   18976 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 20:11:13.645147   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0311 20:11:13.647688   18976 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0311 20:11:13.647702   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0311 20:11:13.666910   18976 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0311 20:11:13.666936   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0311 20:11:13.704024   18976 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0311 20:11:13.704042   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0311 20:11:13.744790   18976 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0311 20:11:13.744822   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0311 20:11:13.786803   18976 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0311 20:11:13.786828   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0311 20:11:13.840218   18976 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0311 20:11:13.840246   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0311 20:11:13.847869   18976 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 20:11:13.847884   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 20:11:13.895953   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0311 20:11:13.900846   18976 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0311 20:11:13.900866   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0311 20:11:13.910591   18976 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0311 20:11:13.910614   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0311 20:11:13.926942   18976 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0311 20:11:13.926964   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0311 20:11:13.960699   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0311 20:11:14.048570   18976 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 20:11:14.048616   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 20:11:14.066681   18976 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0311 20:11:14.066708   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0311 20:11:14.163014   18976 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0311 20:11:14.163039   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0311 20:11:14.164791   18976 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0311 20:11:14.164813   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0311 20:11:14.229644   18976 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0311 20:11:14.229677   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0311 20:11:14.307090   18976 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0311 20:11:14.307110   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0311 20:11:14.346800   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 20:11:14.462973   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0311 20:11:14.527268   18976 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0311 20:11:14.527292   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0311 20:11:14.653293   18976 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0311 20:11:14.653316   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0311 20:11:14.696629   18976 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0311 20:11:14.696655   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0311 20:11:14.881669   18976 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 20:11:14.881708   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0311 20:11:14.975302   18976 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0311 20:11:14.975326   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0311 20:11:15.031677   18976 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0311 20:11:15.031708   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0311 20:11:15.120029   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 20:11:15.186739   18976 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 20:11:15.186760   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0311 20:11:15.706832   18976 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0311 20:11:15.706854   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0311 20:11:15.820238   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 20:11:16.096903   18976 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0311 20:11:16.096933   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0311 20:11:16.415771   18976 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0311 20:11:16.415790   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0311 20:11:16.839821   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0311 20:11:19.562037   18976 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0311 20:11:19.562073   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:19.565989   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:19.566421   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:19.566450   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:19.566683   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:19.566923   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:19.567122   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:19.567263   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:20.398711   18976 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0311 20:11:20.537095   18976 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.081579809s)
	I0311 20:11:20.537220   18976 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.081695203s)
	I0311 20:11:20.537253   18976 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
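	The command completed above is minikube's CoreDNS tweak: the coredns ConfigMap is piped through sed to insert a hosts{} block ahead of the forward-to-/etc/resolv.conf line, so host.minikube.internal resolves to the libvirt gateway 192.168.39.1 from inside the cluster. A minimal client-go sketch of the same edit (an assumption for illustration, not the code behind start.go:948; the ConfigMap name, namespace, marker line, and gateway IP are taken from the logged command):

	package main

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// hostsBlock mirrors the block that the sed expression in the logged command inserts.
	const hostsBlock = "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"

	func main() {
		// Load ~/.kube/config here; inside the VM minikube points kubectl at /var/lib/minikube/kubeconfig instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()

		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		corefile := cm.Data["Corefile"]
		marker := "        forward . /etc/resolv.conf"
		if idx := strings.Index(corefile, marker); idx >= 0 && !strings.Contains(corefile, "host.minikube.internal") {
			// Insert the hosts block just before the forward plugin, like the sed '/forward .../i ...' expression.
			cm.Data["Corefile"] = corefile[:idx] + hostsBlock + corefile[idx:]
			if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
	}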
	I0311 20:11:20.537323   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.032353362s)
	I0311 20:11:20.537370   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:20.537386   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:20.537410   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.983407243s)
	I0311 20:11:20.537450   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:20.537462   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:20.537475   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.968910251s)
	I0311 20:11:20.537501   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:20.537521   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:20.537588   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.968284999s)
	I0311 20:11:20.537607   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:20.537615   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:20.538385   18976 node_ready.go:35] waiting up to 6m0s for node "addons-118179" to be "Ready" ...
	I0311 20:11:20.538640   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:20.538678   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:20.538685   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:20.538683   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.125398741s)
	I0311 20:11:20.538693   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:20.538701   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:20.538708   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:20.538719   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:20.538927   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:20.538968   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:20.538984   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:20.539000   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:20.539016   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:20.539020   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:20.538999   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:20.539073   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:20.539092   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:20.539105   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:20.539323   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:20.539337   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:20.539362   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:20.539369   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:20.539464   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:20.539498   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:20.539513   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:20.539548   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:20.539559   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:20.539567   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:20.539583   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:20.539613   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:20.539622   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:20.539551   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:20.539679   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:20.539766   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:20.539779   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:20.539979   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:20.539993   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:20.540368   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:20.540395   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:20.540402   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:20.558322   18976 node_ready.go:49] node "addons-118179" has status "Ready":"True"
	I0311 20:11:20.558348   18976 node_ready.go:38] duration metric: took 19.925568ms for node "addons-118179" to be "Ready" ...
	I0311 20:11:20.558361   18976 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 20:11:20.644772   18976 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5zh4q" in "kube-system" namespace to be "Ready" ...
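	The node_ready.go and pod_ready.go entries above are readiness polls against the API server: first the node's Ready condition, then the system-critical pods listed in the 6m wait. A minimal sketch of the node half of that check with client-go (an assumption, not minikube's node_ready.go; the node name and 6-minute timeout come from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls until the named node reports the Ready condition as True.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling until the timeout
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "addons-118179", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println(`node "addons-118179" has status "Ready":"True"`)
	}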
	I0311 20:11:20.661353   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:20.661374   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:20.661665   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:20.661673   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:20.661683   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:20.687531   18976 addons.go:234] Setting addon gcp-auth=true in "addons-118179"
	I0311 20:11:20.687576   18976 host.go:66] Checking if "addons-118179" exists ...
	I0311 20:11:20.687854   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:20.687881   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:20.702198   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0311 20:11:20.702528   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:20.702961   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:20.702982   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:20.703293   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:20.703888   18976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:11:20.703924   18976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:11:20.717863   18976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0311 20:11:20.718245   18976 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:11:20.718656   18976 main.go:141] libmachine: Using API Version  1
	I0311 20:11:20.718680   18976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:11:20.718965   18976 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:11:20.719160   18976 main.go:141] libmachine: (addons-118179) Calling .GetState
	I0311 20:11:20.720817   18976 main.go:141] libmachine: (addons-118179) Calling .DriverName
	I0311 20:11:20.721021   18976 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0311 20:11:20.721047   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHHostname
	I0311 20:11:20.724013   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:20.724448   18976 main.go:141] libmachine: (addons-118179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:83", ip: ""} in network mk-addons-118179: {Iface:virbr1 ExpiryTime:2024-03-11 21:10:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:83 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-118179 Clientid:01:52:54:00:ed:0e:83}
	I0311 20:11:20.724473   18976 main.go:141] libmachine: (addons-118179) DBG | domain addons-118179 has defined IP address 192.168.39.50 and MAC address 52:54:00:ed:0e:83 in network mk-addons-118179
	I0311 20:11:20.724651   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHPort
	I0311 20:11:20.724814   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHKeyPath
	I0311 20:11:20.724981   18976 main.go:141] libmachine: (addons-118179) Calling .GetSSHUsername
	I0311 20:11:20.725125   18976 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/addons-118179/id_rsa Username:docker}
	I0311 20:11:21.073820   18976 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-118179" context rescaled to 1 replicas
	I0311 20:11:22.657651   18976 pod_ready.go:102] pod "coredns-5dd5756b68-5zh4q" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:24.039029   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.403019712s)
	I0311 20:11:24.039081   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.143103565s)
	I0311 20:11:24.039102   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.039117   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.039130   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.078394864s)
	I0311 20:11:24.039167   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.039188   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.039222   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.409287639s)
	I0311 20:11:24.039245   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.039265   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.039084   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.039319   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.692488252s)
	I0311 20:11:24.039329   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.039352   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.039356   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.576351659s)
	I0311 20:11:24.039365   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.039377   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.039387   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.039488   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.919421474s)
	W0311 20:11:24.039519   18976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0311 20:11:24.039535   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.039541   18976 retry.go:31] will retry after 290.87054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
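	The failure above is a race between the freshly created VolumeSnapshot CRDs and the csi-hostpath-snapclass object applied in the same kubectl invocation, hence "ensure CRDs are installed first"; retry.go schedules a re-run after 290ms, and the later attempt at 20:11:24.331443 adds --force. A minimal sketch of that apply-and-retry loop (an assumption, not minikube's retry.go; the kubeconfig path and manifest name mirror the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` after a short delay when an attempt
	// fails, e.g. because the CRDs it depends on have not finished registering.
	func applyWithRetry(kubeconfig string, manifests []string, attempts int, delay time.Duration) error {
		args := []string{"--kubeconfig=" + kubeconfig, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed (attempt %d): %v\n%s", i+1, err, out)
			time.Sleep(delay)
		}
		return lastErr
	}

	func main() {
		err := applyWithRetry("/var/lib/minikube/kubeconfig",
			[]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
			3, 300*time.Millisecond)
		fmt.Println(err)
	}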
	I0311 20:11:24.039566   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.039576   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.039584   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.039591   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.039614   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.219341337s)
	I0311 20:11:24.039633   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.039642   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.039648   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.039669   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.039675   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.039683   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.039690   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.039720   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.039729   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.039737   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.039746   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.039749   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.039774   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.039789   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.039792   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.039801   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.039810   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.039819   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.039825   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.039835   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.039843   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.039850   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.039872   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.039810   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.039883   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.039891   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.039899   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.039907   18976 addons.go:470] Verifying addon registry=true in "addons-118179"
	I0311 20:11:24.043623   18976 out.go:177] * Verifying registry addon...
	I0311 20:11:24.040148   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.040172   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.040192   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.040208   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.040222   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.040237   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.040254   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.040271   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.039891   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.042301   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.042324   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.045048   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.045060   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.045067   18976 addons.go:470] Verifying addon metrics-server=true in "addons-118179"
	I0311 20:11:24.045050   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.045082   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.045102   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.045117   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.046657   18976 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-118179 service yakd-dashboard -n yakd-dashboard
	
	I0311 20:11:24.045134   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.045145   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.045369   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.045395   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.045489   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.045554   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.045936   18976 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0311 20:11:24.046731   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.046736   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.048092   18976 addons.go:470] Verifying addon ingress=true in "addons-118179"
	I0311 20:11:24.049492   18976 out.go:177] * Verifying ingress addon...
	I0311 20:11:24.051744   18976 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0311 20:11:24.070232   18976 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0311 20:11:24.070255   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:24.073163   18976 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0311 20:11:24.073183   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:24.086268   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:24.086284   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:24.086621   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:24.086638   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:24.086653   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:24.331443   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 20:11:24.563045   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:24.564566   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:25.219016   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:25.219070   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:25.304209   18976 pod_ready.go:102] pod "coredns-5dd5756b68-5zh4q" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:25.505566   18976 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.784519044s)
	I0311 20:11:25.507581   18976 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 20:11:25.508968   18976 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0311 20:11:25.510422   18976 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0311 20:11:25.510436   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0311 20:11:25.508255   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.668378352s)
	I0311 20:11:25.510529   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:25.510550   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:25.510839   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:25.510855   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:25.510864   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:25.510871   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:25.510873   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:25.511063   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:25.511095   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:25.511114   18976 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-118179"
	I0311 20:11:25.512659   18976 out.go:177] * Verifying csi-hostpath-driver addon...
	I0311 20:11:25.514669   18976 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0311 20:11:25.545591   18976 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0311 20:11:25.545611   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:25.579961   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:25.581376   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:25.651340   18976 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0311 20:11:25.651363   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0311 20:11:25.747534   18976 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0311 20:11:25.747559   18976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0311 20:11:25.792259   18976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0311 20:11:26.025630   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:26.054025   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:26.059770   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:26.521616   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:26.558139   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:26.559428   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:27.026393   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:27.064294   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:27.064637   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:27.526469   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:27.563661   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:27.570788   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:27.612775   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.281261647s)
	I0311 20:11:27.612841   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:27.612860   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:27.613148   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:27.613196   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:27.613209   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:27.613228   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:27.613237   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:27.613429   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:27.613483   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:27.613524   18976 main.go:141] libmachine: (addons-118179) DBG | Closing plugin on server side
	I0311 20:11:27.658436   18976 pod_ready.go:102] pod "coredns-5dd5756b68-5zh4q" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:27.855458   18976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.063160087s)
	I0311 20:11:27.855503   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:27.855521   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:27.855871   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:27.855887   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:27.855897   18976 main.go:141] libmachine: Making call to close driver server
	I0311 20:11:27.855906   18976 main.go:141] libmachine: (addons-118179) Calling .Close
	I0311 20:11:27.856161   18976 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:11:27.856184   18976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:11:27.858031   18976 addons.go:470] Verifying addon gcp-auth=true in "addons-118179"
	I0311 20:11:27.859690   18976 out.go:177] * Verifying gcp-auth addon...
	I0311 20:11:27.862038   18976 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0311 20:11:27.879963   18976 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0311 20:11:27.879980   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:28.029348   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:28.052849   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:28.057204   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:28.152685   18976 pod_ready.go:97] pod "coredns-5dd5756b68-5zh4q" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-11 20:11:13 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-11 20:11:13 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-11 20:11:13 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-11 20:11:13 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.50 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-03-11 20:11:13 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-11 20:11:17 +0000 UTC,FinishedAt:2024-03-11 20:11:27 +0000 UTC,ContainerID:cri-o://e30d12a6edaf4fc81acdeec98a9c69f731c41dd748be5b3389665cd16657caa6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://e30d12a6edaf4fc81acdeec98a9c69f731c41dd748be5b3389665cd16657caa6 Started:0xc002a97080 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0311 20:11:28.152767   18976 pod_ready.go:81] duration metric: took 7.507969247s for pod "coredns-5dd5756b68-5zh4q" in "kube-system" namespace to be "Ready" ...
	E0311 20:11:28.152780   18976 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-5zh4q" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-11 20:11:13 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-11 20:11:13 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-11 20:11:13 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-11 20:11:13 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.50 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-03-11 20:11:13 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-11 20:11:17 +0000 UTC,FinishedAt:2024-03-11 20:11:27 +0000 UTC,ContainerID:cri-o://e30d12a6edaf4fc81acdeec98a9c69f731c41dd748be5b3389665cd16657caa6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://e30d12a6edaf4fc81acdeec98a9c69f731c41dd748be5b3389665cd16657caa6 Started:0xc002a97080 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0311 20:11:28.152794   18976 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace to be "Ready" ...
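The pod_ready checks key off the Ready condition rather than the pod phase, which is why the Succeeded coredns pod above is skipped and the wait moves on to coredns-5dd5756b68-hmxgl. A small sketch of that readiness test using the standard corev1 types; the helper name isPodReady is illustrative, not minikube's pod_ready implementation.

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the pod's Ready condition is True. A pod in
    // phase Succeeded or Failed can never become Ready, which is why the
    // terminated coredns pod above was skipped rather than waited on.
    func isPodReady(p *corev1.Pod) bool {
        if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
            return false
        }
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }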
	I0311 20:11:28.367123   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:28.520834   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:28.552276   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:28.555958   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:28.867764   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:29.021201   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:29.051975   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:29.055734   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:29.367164   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:29.521094   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:29.551209   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:29.555505   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:29.866258   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:30.021097   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:30.052403   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:30.056334   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:30.159047   18976 pod_ready.go:102] pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:30.368128   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:30.520460   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:30.550878   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:30.555738   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:30.865841   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:31.217808   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:31.218800   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:31.219649   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:31.370463   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:31.520983   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:31.551387   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:31.556221   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:31.866025   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:32.021069   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:32.052354   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:32.056213   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:32.165757   18976 pod_ready.go:102] pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:32.366422   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:32.522854   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:32.555961   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:32.560379   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:32.866339   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:33.020897   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:33.051756   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:33.056138   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:33.367228   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:33.527571   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:33.551638   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:33.555836   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:33.866501   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:34.021841   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:34.051754   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:34.056034   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:34.366835   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:34.520837   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:34.551641   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:34.558200   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:34.659387   18976 pod_ready.go:102] pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:34.868209   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:35.020938   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:35.053096   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:35.059761   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:35.452521   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:35.522042   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:35.551281   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:35.555834   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:35.867193   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:36.020842   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:36.051793   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:36.056986   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:36.367736   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:36.521470   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:36.552014   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:36.556047   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:36.866887   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:37.022488   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:37.054856   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:37.058487   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:37.158674   18976 pod_ready.go:102] pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:37.366477   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:37.521607   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:37.551278   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:37.555618   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:37.867227   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:38.021057   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:38.054001   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:38.056723   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:38.367423   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:38.521347   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:38.552994   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:38.556208   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:38.866093   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:39.020411   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:39.052030   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:39.055499   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:39.160062   18976 pod_ready.go:102] pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:39.366588   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:39.565152   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:39.579436   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:39.579578   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:40.357380   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:40.357940   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:40.359395   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:40.365058   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:40.367315   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:40.520996   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:40.557498   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:40.562669   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:40.865723   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:41.021343   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:41.052274   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:41.056303   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:41.366565   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:41.520605   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:41.551940   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:41.556477   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:41.659284   18976 pod_ready.go:102] pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:41.865886   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:42.020245   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:42.051979   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:42.058165   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:42.366281   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:42.521629   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:42.551768   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:42.555784   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:42.869851   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:43.020366   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:43.052101   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:43.056183   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:43.366569   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:43.520873   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:43.551327   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:43.556678   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:43.659433   18976 pod_ready.go:102] pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:43.866689   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:44.025240   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:44.057138   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:44.058844   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:44.377321   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:44.520658   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:44.553715   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:44.557400   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:44.867043   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:45.020659   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:45.052354   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:45.057305   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:45.366287   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:45.524707   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:45.551249   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:45.555210   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:45.813721   18976 pod_ready.go:102] pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:45.869786   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:46.019824   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:46.056923   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:46.061956   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:46.367900   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:46.521771   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:46.551121   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:46.556582   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:46.865855   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:47.019670   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:47.051182   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:47.056251   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:47.366304   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:47.521100   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:47.551938   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:47.557651   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:47.866741   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:48.020791   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:48.051137   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:48.056994   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:48.159819   18976 pod_ready.go:102] pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:48.367344   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:48.520541   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:48.554095   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:48.560471   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:48.871727   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:49.021018   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:49.051684   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:49.055968   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:49.367317   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:49.520495   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:49.551429   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:49.556563   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:49.865684   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:50.022303   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:50.052490   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:50.055686   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:50.160761   18976 pod_ready.go:102] pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:50.366658   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:50.521474   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:50.552187   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:50.556360   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:50.865998   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:51.020521   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:51.052520   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:51.056283   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:51.366289   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:51.522012   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:51.551911   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:51.556336   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:51.866125   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:52.020087   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:52.051904   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:52.055777   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:52.366184   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:52.529747   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:52.554324   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:52.557758   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:52.660178   18976 pod_ready.go:102] pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace has status "Ready":"False"
	I0311 20:11:52.868335   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:53.020507   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:53.054738   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:53.056721   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:53.365705   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:53.520894   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:53.551487   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 20:11:53.555779   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:53.868845   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:54.020896   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:54.051976   18976 kapi.go:107] duration metric: took 30.006038251s to wait for kubernetes.io/minikube-addons=registry ...
	I0311 20:11:54.056005   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:54.367200   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:54.521539   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:54.558543   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:54.665274   18976 pod_ready.go:92] pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace has status "Ready":"True"
	I0311 20:11:54.665298   18976 pod_ready.go:81] duration metric: took 26.512494534s for pod "coredns-5dd5756b68-hmxgl" in "kube-system" namespace to be "Ready" ...
	I0311 20:11:54.665310   18976 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-118179" in "kube-system" namespace to be "Ready" ...
	I0311 20:11:54.672686   18976 pod_ready.go:92] pod "etcd-addons-118179" in "kube-system" namespace has status "Ready":"True"
	I0311 20:11:54.672710   18976 pod_ready.go:81] duration metric: took 7.391905ms for pod "etcd-addons-118179" in "kube-system" namespace to be "Ready" ...
	I0311 20:11:54.672721   18976 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-118179" in "kube-system" namespace to be "Ready" ...
	I0311 20:11:54.678852   18976 pod_ready.go:92] pod "kube-apiserver-addons-118179" in "kube-system" namespace has status "Ready":"True"
	I0311 20:11:54.678877   18976 pod_ready.go:81] duration metric: took 6.147633ms for pod "kube-apiserver-addons-118179" in "kube-system" namespace to be "Ready" ...
	I0311 20:11:54.678890   18976 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-118179" in "kube-system" namespace to be "Ready" ...
	I0311 20:11:54.693670   18976 pod_ready.go:92] pod "kube-controller-manager-addons-118179" in "kube-system" namespace has status "Ready":"True"
	I0311 20:11:54.693696   18976 pod_ready.go:81] duration metric: took 14.79726ms for pod "kube-controller-manager-addons-118179" in "kube-system" namespace to be "Ready" ...
	I0311 20:11:54.693712   18976 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-875cw" in "kube-system" namespace to be "Ready" ...
	I0311 20:11:54.704220   18976 pod_ready.go:92] pod "kube-proxy-875cw" in "kube-system" namespace has status "Ready":"True"
	I0311 20:11:54.704246   18976 pod_ready.go:81] duration metric: took 10.525071ms for pod "kube-proxy-875cw" in "kube-system" namespace to be "Ready" ...
	I0311 20:11:54.704261   18976 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-118179" in "kube-system" namespace to be "Ready" ...
	I0311 20:11:54.871744   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:55.021153   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:55.055914   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:55.057648   18976 pod_ready.go:92] pod "kube-scheduler-addons-118179" in "kube-system" namespace has status "Ready":"True"
	I0311 20:11:55.057671   18976 pod_ready.go:81] duration metric: took 353.401329ms for pod "kube-scheduler-addons-118179" in "kube-system" namespace to be "Ready" ...
	I0311 20:11:55.057682   18976 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fkxwj" in "kube-system" namespace to be "Ready" ...
	I0311 20:11:55.366161   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:55.457071   18976 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-fkxwj" in "kube-system" namespace has status "Ready":"True"
	I0311 20:11:55.457092   18976 pod_ready.go:81] duration metric: took 399.403108ms for pod "nvidia-device-plugin-daemonset-fkxwj" in "kube-system" namespace to be "Ready" ...
	I0311 20:11:55.457110   18976 pod_ready.go:38] duration metric: took 34.898736583s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 20:11:55.457125   18976 api_server.go:52] waiting for apiserver process to appear ...
	I0311 20:11:55.457198   18976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:11:55.507717   18976 api_server.go:72] duration metric: took 42.662427783s to wait for apiserver process to appear ...
	I0311 20:11:55.507744   18976 api_server.go:88] waiting for apiserver healthz status ...
	I0311 20:11:55.507768   18976 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0311 20:11:55.512531   18976 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I0311 20:11:55.513975   18976 api_server.go:141] control plane version: v1.28.4
	I0311 20:11:55.513997   18976 api_server.go:131] duration metric: took 6.244595ms to wait for apiserver health ...
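The healthz probe above is an HTTPS GET against the apiserver that must return 200 with an "ok" body. A minimal sketch using the address from the log; it skips TLS verification for brevity, whereas the real harness authenticates with the cluster's certificates.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // InsecureSkipVerify keeps the sketch short; a real client would load
        // the cluster CA and client certificates from the kubeconfig instead.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.39.50:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the literal body "ok".
        fmt.Printf("https://192.168.39.50:8443/healthz returned %d: %s\n", resp.StatusCode, body)
    }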
	I0311 20:11:55.514007   18976 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 20:11:55.519396   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:55.557657   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:55.663763   18976 system_pods.go:59] 18 kube-system pods found
	I0311 20:11:55.663795   18976 system_pods.go:61] "coredns-5dd5756b68-hmxgl" [0919dc44-37b7-44f5-a43d-6c181b3205d2] Running
	I0311 20:11:55.663805   18976 system_pods.go:61] "csi-hostpath-attacher-0" [b75fc6d5-7127-475d-98eb-5aef17d18407] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0311 20:11:55.663814   18976 system_pods.go:61] "csi-hostpath-resizer-0" [98e28188-80ac-4355-9b78-ab9b382862fc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0311 20:11:55.663840   18976 system_pods.go:61] "csi-hostpathplugin-7lk4t" [0d926afa-937c-4e6b-aa6f-e85805d579b5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0311 20:11:55.663846   18976 system_pods.go:61] "etcd-addons-118179" [faf0ea05-72d3-4cf4-9960-d33c799c7208] Running
	I0311 20:11:55.663853   18976 system_pods.go:61] "kube-apiserver-addons-118179" [361b6804-03a4-4325-a06e-c35e1e6998c2] Running
	I0311 20:11:55.663859   18976 system_pods.go:61] "kube-controller-manager-addons-118179" [5f2ce157-6f45-4c41-b8d9-72f60d009ac3] Running
	I0311 20:11:55.663865   18976 system_pods.go:61] "kube-ingress-dns-minikube" [f473aa81-6f8d-4fe8-af58-2b497e88e3a0] Running
	I0311 20:11:55.663873   18976 system_pods.go:61] "kube-proxy-875cw" [fff8a34e-a286-44af-b9b4-d58337259a79] Running
	I0311 20:11:55.663878   18976 system_pods.go:61] "kube-scheduler-addons-118179" [deefff26-58e3-4a65-8f82-96e48f599679] Running
	I0311 20:11:55.663886   18976 system_pods.go:61] "metrics-server-69cf46c98-rngft" [2972db78-e263-4e81-ae94-b595ca23332c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 20:11:55.663892   18976 system_pods.go:61] "nvidia-device-plugin-daemonset-fkxwj" [43b433a3-3b3c-4cf4-a6c9-f11d6986e1a2] Running
	I0311 20:11:55.663900   18976 system_pods.go:61] "registry-9xb76" [3903cf06-c0ac-4d15-a746-05339675f06d] Running
	I0311 20:11:55.663905   18976 system_pods.go:61] "registry-proxy-6lhvc" [4094d1fb-0775-4dc1-b7b3-22fbe462ee70] Running
	I0311 20:11:55.663915   18976 system_pods.go:61] "snapshot-controller-58dbcc7b99-4nv8s" [f6549098-66e9-4d4b-b258-5f76c90b0a35] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 20:11:55.663924   18976 system_pods.go:61] "snapshot-controller-58dbcc7b99-cn4j4" [01207a1b-381b-4846-ae53-0191c8174769] Running
	I0311 20:11:55.663930   18976 system_pods.go:61] "storage-provisioner" [c49225e1-b1da-4ad7-bdd2-655ce1760e47] Running
	I0311 20:11:55.663937   18976 system_pods.go:61] "tiller-deploy-7b677967b9-zqbdm" [ade214cb-6f64-48e5-bcbb-916b4343fd3b] Running
	I0311 20:11:55.663948   18976 system_pods.go:74] duration metric: took 149.931522ms to wait for pod list to return data ...
	I0311 20:11:55.663963   18976 default_sa.go:34] waiting for default service account to be created ...
	I0311 20:11:55.857229   18976 default_sa.go:45] found service account: "default"
	I0311 20:11:55.857251   18976 default_sa.go:55] duration metric: took 193.28147ms for default service account to be created ...
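The default_sa wait only needs to see a ServiceAccount named "default". A hedged client-go sketch; the "default" namespace used here is an assumption (the log only prints the account name), and the helper name is illustrative.

    package sketch

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultSAExists checks for the "default" ServiceAccount; the namespace
    // is an assumption, since the log above only shows the account name.
    func defaultSAExists(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return false, nil
        }
        if err != nil {
            return false, err
        }
        return true, nil
    }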
	I0311 20:11:55.857262   18976 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 20:11:55.911027   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:56.022151   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:56.057570   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:56.068073   18976 system_pods.go:86] 18 kube-system pods found
	I0311 20:11:56.068097   18976 system_pods.go:89] "coredns-5dd5756b68-hmxgl" [0919dc44-37b7-44f5-a43d-6c181b3205d2] Running
	I0311 20:11:56.068104   18976 system_pods.go:89] "csi-hostpath-attacher-0" [b75fc6d5-7127-475d-98eb-5aef17d18407] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0311 20:11:56.068111   18976 system_pods.go:89] "csi-hostpath-resizer-0" [98e28188-80ac-4355-9b78-ab9b382862fc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0311 20:11:56.068123   18976 system_pods.go:89] "csi-hostpathplugin-7lk4t" [0d926afa-937c-4e6b-aa6f-e85805d579b5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0311 20:11:56.068131   18976 system_pods.go:89] "etcd-addons-118179" [faf0ea05-72d3-4cf4-9960-d33c799c7208] Running
	I0311 20:11:56.068136   18976 system_pods.go:89] "kube-apiserver-addons-118179" [361b6804-03a4-4325-a06e-c35e1e6998c2] Running
	I0311 20:11:56.068140   18976 system_pods.go:89] "kube-controller-manager-addons-118179" [5f2ce157-6f45-4c41-b8d9-72f60d009ac3] Running
	I0311 20:11:56.068148   18976 system_pods.go:89] "kube-ingress-dns-minikube" [f473aa81-6f8d-4fe8-af58-2b497e88e3a0] Running
	I0311 20:11:56.068151   18976 system_pods.go:89] "kube-proxy-875cw" [fff8a34e-a286-44af-b9b4-d58337259a79] Running
	I0311 20:11:56.068155   18976 system_pods.go:89] "kube-scheduler-addons-118179" [deefff26-58e3-4a65-8f82-96e48f599679] Running
	I0311 20:11:56.068160   18976 system_pods.go:89] "metrics-server-69cf46c98-rngft" [2972db78-e263-4e81-ae94-b595ca23332c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 20:11:56.068166   18976 system_pods.go:89] "nvidia-device-plugin-daemonset-fkxwj" [43b433a3-3b3c-4cf4-a6c9-f11d6986e1a2] Running
	I0311 20:11:56.068171   18976 system_pods.go:89] "registry-9xb76" [3903cf06-c0ac-4d15-a746-05339675f06d] Running
	I0311 20:11:56.068174   18976 system_pods.go:89] "registry-proxy-6lhvc" [4094d1fb-0775-4dc1-b7b3-22fbe462ee70] Running
	I0311 20:11:56.068180   18976 system_pods.go:89] "snapshot-controller-58dbcc7b99-4nv8s" [f6549098-66e9-4d4b-b258-5f76c90b0a35] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 20:11:56.068187   18976 system_pods.go:89] "snapshot-controller-58dbcc7b99-cn4j4" [01207a1b-381b-4846-ae53-0191c8174769] Running
	I0311 20:11:56.068191   18976 system_pods.go:89] "storage-provisioner" [c49225e1-b1da-4ad7-bdd2-655ce1760e47] Running
	I0311 20:11:56.068195   18976 system_pods.go:89] "tiller-deploy-7b677967b9-zqbdm" [ade214cb-6f64-48e5-bcbb-916b4343fd3b] Running
	I0311 20:11:56.068200   18976 system_pods.go:126] duration metric: took 210.932709ms to wait for k8s-apps to be running ...
	I0311 20:11:56.068206   18976 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 20:11:56.068245   18976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:11:56.103238   18976 system_svc.go:56] duration metric: took 35.024866ms WaitForService to wait for kubelet
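The kubelet check above relies purely on systemd's exit status: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active. A minimal local sketch of the same test (the harness runs it over SSH on the node instead).

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // "is-active --quiet" prints nothing; the unit state is signalled
        // entirely through the exit code (0 means active).
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }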
	I0311 20:11:56.103265   18976 kubeadm.go:576] duration metric: took 43.257985258s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 20:11:56.103286   18976 node_conditions.go:102] verifying NodePressure condition ...
	I0311 20:11:56.259016   18976 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 20:11:56.259042   18976 node_conditions.go:123] node cpu capacity is 2
	I0311 20:11:56.259057   18976 node_conditions.go:105] duration metric: took 155.764842ms to run NodePressure ...
	I0311 20:11:56.259068   18976 start.go:240] waiting for startup goroutines ...
	I0311 20:11:56.370547   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:56.520732   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:56.555753   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:56.866666   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:57.022132   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:57.058654   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:57.366184   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:57.527729   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:57.557021   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:57.865779   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:58.021482   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:58.057039   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:58.366527   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:58.520808   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:58.561681   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:59.032201   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:59.038981   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:59.065198   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:59.366811   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:11:59.519511   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:11:59.555805   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:11:59.866465   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:00.020566   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:00.056335   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:00.366378   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:00.521118   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:00.558327   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:00.870390   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:01.021136   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:01.058192   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:01.366388   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:01.521233   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:01.556112   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:01.866685   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:02.020534   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:02.055793   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:02.365522   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:02.522622   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:02.557207   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:02.865691   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:03.025131   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:03.066633   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:03.366683   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:03.522512   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:03.556238   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:03.865848   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:04.021354   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:04.057918   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:04.366836   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:04.522312   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:04.556977   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:05.072973   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:05.073236   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:05.078956   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:05.367049   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:05.520591   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:05.556400   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:05.875921   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:06.031222   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:06.059180   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:06.366909   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:06.522646   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:06.558345   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:06.868589   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:07.021072   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:07.056592   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:07.366343   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:07.520939   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:07.556369   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:07.866919   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:08.023744   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:08.059189   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:08.366069   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:08.520250   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:08.556634   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:08.865732   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:09.022420   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:09.057038   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:09.365895   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:09.706962   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:09.707973   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:09.875183   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:10.020080   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:10.056434   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:10.369392   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:10.521191   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:10.556415   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:10.866603   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:11.022169   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:11.056648   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:11.365569   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:11.521235   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:11.559546   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:11.865891   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:12.020465   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:12.057239   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:12.366354   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:12.526619   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:12.555838   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:12.869907   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:13.024540   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:13.057545   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:13.367359   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:13.521745   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:13.556371   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:13.866070   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:14.020620   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:14.175470   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:14.366571   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:14.521754   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:14.557702   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:14.865167   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:15.020640   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:15.056770   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:15.366575   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:15.522134   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:15.556725   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:15.865686   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:16.030331   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:16.060425   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:16.366386   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:16.520945   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:16.557304   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:16.870162   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:17.020711   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:17.056708   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:17.680243   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:17.684468   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:17.684673   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:17.865554   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:18.024950   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:18.060157   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:18.366916   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:18.520687   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:18.557596   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:18.867172   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:19.020187   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:19.057240   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:19.366550   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:19.521539   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:19.559137   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:19.866098   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:20.026995   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:20.061782   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:20.368990   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:20.520976   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:20.556330   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:20.866404   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:21.021854   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:21.056824   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:21.366114   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:21.521341   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:21.556580   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:21.871579   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:22.021514   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:22.057853   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:22.373826   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:22.532177   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:22.557875   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:22.866335   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:23.020967   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:23.061472   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:23.365323   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:23.520121   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:23.556580   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:23.865790   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:24.020670   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:24.056341   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:24.368592   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:24.521503   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:24.559003   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:24.866291   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:25.021544   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:25.056499   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:25.367344   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:25.530458   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:25.563113   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:25.866241   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:26.304394   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:26.307944   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:26.366133   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:26.522363   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:26.556272   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:26.867101   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:27.020339   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 20:12:27.056752   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:27.366301   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:27.521319   18976 kapi.go:107] duration metric: took 1m2.006648825s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0311 20:12:27.557110   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:27.866080   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:28.056797   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:28.369273   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:28.558317   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:28.866372   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:29.056557   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:29.366866   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:29.556317   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:29.869496   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:30.056658   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:30.366885   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:30.556933   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:30.868556   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:31.056813   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:31.366012   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:31.556475   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:31.866454   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:32.058045   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:32.620912   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:32.621199   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:32.866084   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:33.057087   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:33.366790   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:33.557754   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:33.871568   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:34.059049   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:34.366155   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:34.556289   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:34.866687   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:35.058813   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:35.547388   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:35.557166   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:35.867085   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:36.056214   18976 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 20:12:36.367211   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:36.571981   18976 kapi.go:107] duration metric: took 1m12.520233344s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0311 20:12:36.866141   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:37.366053   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:37.873234   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:38.366493   18976 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 20:12:38.866140   18976 kapi.go:107] duration metric: took 1m11.004098624s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0311 20:12:38.867895   18976 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-118179 cluster.
	I0311 20:12:38.869377   18976 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0311 20:12:38.870730   18976 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0311 20:12:38.872124   18976 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, storage-provisioner-rancher, metrics-server, inspektor-gadget, helm-tiller, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0311 20:12:38.873366   18976 addons.go:505] duration metric: took 1m26.028049271s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner storage-provisioner-rancher metrics-server inspektor-gadget helm-tiller yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0311 20:12:38.873398   18976 start.go:245] waiting for cluster config update ...
	I0311 20:12:38.873413   18976 start.go:254] writing updated cluster config ...
	I0311 20:12:38.873642   18976 ssh_runner.go:195] Run: rm -f paused
	I0311 20:12:38.923594   18976 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 20:12:38.925461   18976 out.go:177] * Done! kubectl is now configured to use "addons-118179" cluster and "default" namespace by default
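For reference, the gcp-auth messages above describe how to opt a pod out of the credential mount: per the addon's own output, adding a label with the `gcp-auth-skip-secret` key to the pod configuration skips it. A minimal, hypothetical manifest illustrating this is sketched below; the pod name is made up, the image is borrowed from the hello-world-app containers listed later in this report, and the label value "true" is an assumption (the log message only names the key).

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                 # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"     # opt this pod out of the gcp-auth credential mount (value assumed)
	spec:
	  containers:
	  - name: app
	    image: gcr.io/google-samples/hello-app:1.0   # example image, as used elsewhere in this report

Applying a manifest like this with `kubectl apply -f` in the addons-118179 cluster would, per the message above, leave the pod without the mounted GCP credentials; recreating existing pods (or rerunning `addons enable` with --refresh) is what picks the behavior up for pods created before the addon was enabled.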
	
	
	==> CRI-O <==
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.671418334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710188135671345125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563325,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7096abce-03ae-451e-ad35-d4842313f37b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.672159111Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32dc3f9f-508e-4ec7-abe6-71280e0b2708 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.672460716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32dc3f9f-508e-4ec7-abe6-71280e0b2708 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.673140148Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:949d655b6adad0e70f44f8048baf19f648b6f160c04fdd7ae3d75f2ba165bbe4,PodSandboxId:259572018db0358570b70172d69a11f30d4405d97e978284febea223b02a2ce2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710188127475575102,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-d8jdt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 880146a6-e693-44b3-9453-0c3dfc82e4a6,},Annotations:map[string]string{io.kubernetes.container.hash: 6de59312,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89880dcafa47dfe2752931f85156e5d1bc679539dc46fe04d6792fb2ce76e0f,PodSandboxId:56b48f8ecf0a2cf8041dee427092b69a9336fa950218d983b91d74d162754d0c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710188004640166905,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-9hb9p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 67df60ae-ad11-4767-b3eb-ccfceb9799a9,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d10c557d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c659fdc24ce30c9f825069c19cb5273f57692dc8dfbf1e1709355ebf2ba72444,PodSandboxId:fe08a3d7b3846f87747a791a6a315a0e65a7661af3a3b65165fc93389c2a0a80,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710187988852766042,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: a141ff9e-e505-4bd1-ac33-95eb2183ab84,},Annotations:map[string]string{io.kubernetes.container.hash: 58cbe66,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fabb315f957c64d36628aa18c064abebf8f3d67179673558982b540ab45139e7,PodSandboxId:509b89ed2e606cc4e7445fae10c6338735831bb18310d94df6bb1f5b9f1df2ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1710187957644888234,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-5lxzk,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 9677c2c3-7bb3-4fc5-a683-6d496f54593e,},Annotations:map[string]string{io.kubernetes.container.hash: 4750faf6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46646564686ce95743639d3cbebd586e920c24e9b1d7bbf9bff4898e807945d0,PodSandboxId:d72b56b69121138704370d5c7de70c45885dc44019d0587e84aadb3ed43e3719,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAIN
ER_EXITED,CreatedAt:1710187934381908974,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7zgj9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2c70aed1-fd7a-4598-b3ce-3d5d04bc62c6,},Annotations:map[string]string{io.kubernetes.container.hash: af80f792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7494df3f8363aac6a872128d1587f67aa71ff661c83cd9fd6625b560987ec35c,PodSandboxId:2db2b5102a05283d3910c5aace31a6189de801e5e72a9fa3f349a24e7f5693d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a
1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710187934265226019,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7zt97,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ff650163-9820-4984-80be-e98af6572e34,},Annotations:map[string]string{io.kubernetes.container.hash: 2706cbc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f59c49deef4d2c1addb7bec2cdb6963f27ddbbd7bdb20597018d38ca6653eba5,PodSandboxId:9a4a5bb81f7716027dc13c062fee27d6d57b9eb220fd90f0412924e4715d8618,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb
18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710187929824656990,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-ttwqt,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb8f70d5-1cf3-44eb-89cf-5d530bebda0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9431b0d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc893e6380cb372874f04c9704be7997538b6195be2c9a39db45c8d3a25df99,PodSandboxId:d0e7dc8b0e7b0c5fe39c9c4031a7b956e9a11ab2304d0c19c34bae408cd7d242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710187881232514066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c49225e1-b1da-4ad7-bdd2-655ce1760e47,},Annotations:map[string]string{io.kubernetes.container.hash: 2f5e6333,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21ccf11f2216269811f5e7e4717e76f14289e4e2691562b14ecb6696536b4d2,PodSandboxId:dde9cf57f0ff60559dec12104c36f9301fc4b0c5e8220521d0de90f97187f5ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710187876360411680,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hmxgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919dc44-37b7-44f5-a43d-6c181b3205d2,},Annotations:map[string]string{io.kubernetes.container.hash: 15693c44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b454639944f8749341ef1de1d9662b0a72d1500ff95f7a9bbd8a9dc93543f75,PodSandboxId:5d588a899805c1b175c4b7f784d690f01749a7ec9b8178c5747483b6f3f01bea
,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710187876156799145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-875cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff8a34e-a286-44af-b9b4-d58337259a79,},Annotations:map[string]string{io.kubernetes.container.hash: c34cbd62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69ba60484698ed657204d9ec1e69ac44072f72bf176f465e052e471fe3a06ff,PodSandboxId:a7b0900d4a81c3a208b2dd1431160613ac641676d00c717be0ca9110bb9d6808,Metadata:&ContainerMetadata{Name
:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710187854361110899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 617f297a7d9b89d28d441dd49ca22783,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5593cb0b0c48316ea2def3ed403867f6804f58a5d656167bc940e96570c7aff7,PodSandboxId:08a19e9c78e4ca549527ff183156b16931fb21ac32111f4cd5e98b9ded1c2797,Metadata:&ContainerMet
adata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710187854337812105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3beb886cd73cc800cb53572ee8b16955,},Annotations:map[string]string{io.kubernetes.container.hash: 9a2dbca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40763da36815929044f217a915b815772c0d2cf43c36a2bff42f83c6318a46a,PodSandboxId:64f20733d9c1fa7314434944760c6bac1715987b00aabf64c77c842768630563,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&Im
ageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710187854299790663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7366e334d56c19fa7026982e90836f,},Annotations:map[string]string{io.kubernetes.container.hash: dfe7758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6cbf60abac7c2a2f81bac1bff8f52147cdfd994106316d9423d582b7826beff,PodSandboxId:7706f340ebd3582a03e3e567159b4c68e25b479d8b8f407b12b196f252030fd6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3d
b313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710187854268651949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a28dcdab218c66dc1fb94fb1b955517,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32dc3f9f-508e-4ec7-abe6-71280e0b2708 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.718133486Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e01d3f1f-56d6-400a-8585-3e36054179f5 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.718207161Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e01d3f1f-56d6-400a-8585-3e36054179f5 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.719422158Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83f30ce4-77eb-44a2-a20c-733b13d33b2d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.720578421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710188135720555230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563325,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83f30ce4-77eb-44a2-a20c-733b13d33b2d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.721315719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b60d0a42-050b-48ee-8e61-898b01aa04f1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.721398036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b60d0a42-050b-48ee-8e61-898b01aa04f1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.721822634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:949d655b6adad0e70f44f8048baf19f648b6f160c04fdd7ae3d75f2ba165bbe4,PodSandboxId:259572018db0358570b70172d69a11f30d4405d97e978284febea223b02a2ce2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710188127475575102,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-d8jdt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 880146a6-e693-44b3-9453-0c3dfc82e4a6,},Annotations:map[string]string{io.kubernetes.container.hash: 6de59312,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89880dcafa47dfe2752931f85156e5d1bc679539dc46fe04d6792fb2ce76e0f,PodSandboxId:56b48f8ecf0a2cf8041dee427092b69a9336fa950218d983b91d74d162754d0c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710188004640166905,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-9hb9p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 67df60ae-ad11-4767-b3eb-ccfceb9799a9,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d10c557d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c659fdc24ce30c9f825069c19cb5273f57692dc8dfbf1e1709355ebf2ba72444,PodSandboxId:fe08a3d7b3846f87747a791a6a315a0e65a7661af3a3b65165fc93389c2a0a80,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710187988852766042,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: a141ff9e-e505-4bd1-ac33-95eb2183ab84,},Annotations:map[string]string{io.kubernetes.container.hash: 58cbe66,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fabb315f957c64d36628aa18c064abebf8f3d67179673558982b540ab45139e7,PodSandboxId:509b89ed2e606cc4e7445fae10c6338735831bb18310d94df6bb1f5b9f1df2ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1710187957644888234,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-5lxzk,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 9677c2c3-7bb3-4fc5-a683-6d496f54593e,},Annotations:map[string]string{io.kubernetes.container.hash: 4750faf6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46646564686ce95743639d3cbebd586e920c24e9b1d7bbf9bff4898e807945d0,PodSandboxId:d72b56b69121138704370d5c7de70c45885dc44019d0587e84aadb3ed43e3719,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAIN
ER_EXITED,CreatedAt:1710187934381908974,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7zgj9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2c70aed1-fd7a-4598-b3ce-3d5d04bc62c6,},Annotations:map[string]string{io.kubernetes.container.hash: af80f792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7494df3f8363aac6a872128d1587f67aa71ff661c83cd9fd6625b560987ec35c,PodSandboxId:2db2b5102a05283d3910c5aace31a6189de801e5e72a9fa3f349a24e7f5693d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a
1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710187934265226019,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7zt97,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ff650163-9820-4984-80be-e98af6572e34,},Annotations:map[string]string{io.kubernetes.container.hash: 2706cbc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f59c49deef4d2c1addb7bec2cdb6963f27ddbbd7bdb20597018d38ca6653eba5,PodSandboxId:9a4a5bb81f7716027dc13c062fee27d6d57b9eb220fd90f0412924e4715d8618,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb
18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710187929824656990,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-ttwqt,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb8f70d5-1cf3-44eb-89cf-5d530bebda0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9431b0d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc893e6380cb372874f04c9704be7997538b6195be2c9a39db45c8d3a25df99,PodSandboxId:d0e7dc8b0e7b0c5fe39c9c4031a7b956e9a11ab2304d0c19c34bae408cd7d242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710187881232514066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c49225e1-b1da-4ad7-bdd2-655ce1760e47,},Annotations:map[string]string{io.kubernetes.container.hash: 2f5e6333,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21ccf11f2216269811f5e7e4717e76f14289e4e2691562b14ecb6696536b4d2,PodSandboxId:dde9cf57f0ff60559dec12104c36f9301fc4b0c5e8220521d0de90f97187f5ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710187876360411680,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hmxgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919dc44-37b7-44f5-a43d-6c181b3205d2,},Annotations:map[string]string{io.kubernetes.container.hash: 15693c44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b454639944f8749341ef1de1d9662b0a72d1500ff95f7a9bbd8a9dc93543f75,PodSandboxId:5d588a899805c1b175c4b7f784d690f01749a7ec9b8178c5747483b6f3f01bea
,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710187876156799145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-875cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff8a34e-a286-44af-b9b4-d58337259a79,},Annotations:map[string]string{io.kubernetes.container.hash: c34cbd62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69ba60484698ed657204d9ec1e69ac44072f72bf176f465e052e471fe3a06ff,PodSandboxId:a7b0900d4a81c3a208b2dd1431160613ac641676d00c717be0ca9110bb9d6808,Metadata:&ContainerMetadata{Name
:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710187854361110899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 617f297a7d9b89d28d441dd49ca22783,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5593cb0b0c48316ea2def3ed403867f6804f58a5d656167bc940e96570c7aff7,PodSandboxId:08a19e9c78e4ca549527ff183156b16931fb21ac32111f4cd5e98b9ded1c2797,Metadata:&ContainerMet
adata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710187854337812105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3beb886cd73cc800cb53572ee8b16955,},Annotations:map[string]string{io.kubernetes.container.hash: 9a2dbca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40763da36815929044f217a915b815772c0d2cf43c36a2bff42f83c6318a46a,PodSandboxId:64f20733d9c1fa7314434944760c6bac1715987b00aabf64c77c842768630563,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&Im
ageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710187854299790663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7366e334d56c19fa7026982e90836f,},Annotations:map[string]string{io.kubernetes.container.hash: dfe7758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6cbf60abac7c2a2f81bac1bff8f52147cdfd994106316d9423d582b7826beff,PodSandboxId:7706f340ebd3582a03e3e567159b4c68e25b479d8b8f407b12b196f252030fd6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3d
b313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710187854268651949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a28dcdab218c66dc1fb94fb1b955517,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b60d0a42-050b-48ee-8e61-898b01aa04f1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.757866999Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a670b8e-b6fe-4150-ae94-284d24b75d1b name=/runtime.v1.RuntimeService/Version
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.757940678Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a670b8e-b6fe-4150-ae94-284d24b75d1b name=/runtime.v1.RuntimeService/Version
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.759705748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd6f0059-6e2a-4157-aebc-ea65c5a92e6e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.761096339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710188135761071417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563325,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd6f0059-6e2a-4157-aebc-ea65c5a92e6e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.761780704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=677a20b9-599b-4686-abbe-e78daa0974e4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.761835066Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=677a20b9-599b-4686-abbe-e78daa0974e4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.762109184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:949d655b6adad0e70f44f8048baf19f648b6f160c04fdd7ae3d75f2ba165bbe4,PodSandboxId:259572018db0358570b70172d69a11f30d4405d97e978284febea223b02a2ce2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710188127475575102,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-d8jdt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 880146a6-e693-44b3-9453-0c3dfc82e4a6,},Annotations:map[string]string{io.kubernetes.container.hash: 6de59312,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89880dcafa47dfe2752931f85156e5d1bc679539dc46fe04d6792fb2ce76e0f,PodSandboxId:56b48f8ecf0a2cf8041dee427092b69a9336fa950218d983b91d74d162754d0c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710188004640166905,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-9hb9p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 67df60ae-ad11-4767-b3eb-ccfceb9799a9,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d10c557d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c659fdc24ce30c9f825069c19cb5273f57692dc8dfbf1e1709355ebf2ba72444,PodSandboxId:fe08a3d7b3846f87747a791a6a315a0e65a7661af3a3b65165fc93389c2a0a80,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710187988852766042,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: a141ff9e-e505-4bd1-ac33-95eb2183ab84,},Annotations:map[string]string{io.kubernetes.container.hash: 58cbe66,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fabb315f957c64d36628aa18c064abebf8f3d67179673558982b540ab45139e7,PodSandboxId:509b89ed2e606cc4e7445fae10c6338735831bb18310d94df6bb1f5b9f1df2ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1710187957644888234,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-5lxzk,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 9677c2c3-7bb3-4fc5-a683-6d496f54593e,},Annotations:map[string]string{io.kubernetes.container.hash: 4750faf6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46646564686ce95743639d3cbebd586e920c24e9b1d7bbf9bff4898e807945d0,PodSandboxId:d72b56b69121138704370d5c7de70c45885dc44019d0587e84aadb3ed43e3719,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAIN
ER_EXITED,CreatedAt:1710187934381908974,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7zgj9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2c70aed1-fd7a-4598-b3ce-3d5d04bc62c6,},Annotations:map[string]string{io.kubernetes.container.hash: af80f792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7494df3f8363aac6a872128d1587f67aa71ff661c83cd9fd6625b560987ec35c,PodSandboxId:2db2b5102a05283d3910c5aace31a6189de801e5e72a9fa3f349a24e7f5693d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a
1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710187934265226019,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7zt97,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ff650163-9820-4984-80be-e98af6572e34,},Annotations:map[string]string{io.kubernetes.container.hash: 2706cbc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f59c49deef4d2c1addb7bec2cdb6963f27ddbbd7bdb20597018d38ca6653eba5,PodSandboxId:9a4a5bb81f7716027dc13c062fee27d6d57b9eb220fd90f0412924e4715d8618,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb
18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710187929824656990,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-ttwqt,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb8f70d5-1cf3-44eb-89cf-5d530bebda0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9431b0d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc893e6380cb372874f04c9704be7997538b6195be2c9a39db45c8d3a25df99,PodSandboxId:d0e7dc8b0e7b0c5fe39c9c4031a7b956e9a11ab2304d0c19c34bae408cd7d242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710187881232514066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c49225e1-b1da-4ad7-bdd2-655ce1760e47,},Annotations:map[string]string{io.kubernetes.container.hash: 2f5e6333,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21ccf11f2216269811f5e7e4717e76f14289e4e2691562b14ecb6696536b4d2,PodSandboxId:dde9cf57f0ff60559dec12104c36f9301fc4b0c5e8220521d0de90f97187f5ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710187876360411680,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hmxgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919dc44-37b7-44f5-a43d-6c181b3205d2,},Annotations:map[string]string{io.kubernetes.container.hash: 15693c44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b454639944f8749341ef1de1d9662b0a72d1500ff95f7a9bbd8a9dc93543f75,PodSandboxId:5d588a899805c1b175c4b7f784d690f01749a7ec9b8178c5747483b6f3f01bea
,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710187876156799145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-875cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff8a34e-a286-44af-b9b4-d58337259a79,},Annotations:map[string]string{io.kubernetes.container.hash: c34cbd62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69ba60484698ed657204d9ec1e69ac44072f72bf176f465e052e471fe3a06ff,PodSandboxId:a7b0900d4a81c3a208b2dd1431160613ac641676d00c717be0ca9110bb9d6808,Metadata:&ContainerMetadata{Name
:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710187854361110899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 617f297a7d9b89d28d441dd49ca22783,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5593cb0b0c48316ea2def3ed403867f6804f58a5d656167bc940e96570c7aff7,PodSandboxId:08a19e9c78e4ca549527ff183156b16931fb21ac32111f4cd5e98b9ded1c2797,Metadata:&ContainerMet
adata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710187854337812105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3beb886cd73cc800cb53572ee8b16955,},Annotations:map[string]string{io.kubernetes.container.hash: 9a2dbca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40763da36815929044f217a915b815772c0d2cf43c36a2bff42f83c6318a46a,PodSandboxId:64f20733d9c1fa7314434944760c6bac1715987b00aabf64c77c842768630563,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&Im
ageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710187854299790663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7366e334d56c19fa7026982e90836f,},Annotations:map[string]string{io.kubernetes.container.hash: dfe7758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6cbf60abac7c2a2f81bac1bff8f52147cdfd994106316d9423d582b7826beff,PodSandboxId:7706f340ebd3582a03e3e567159b4c68e25b479d8b8f407b12b196f252030fd6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3d
b313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710187854268651949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a28dcdab218c66dc1fb94fb1b955517,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=677a20b9-599b-4686-abbe-e78daa0974e4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.808821901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a511171-2b50-41f3-a166-b8028d2cd6d8 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.808924489Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a511171-2b50-41f3-a166-b8028d2cd6d8 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.811082950Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=682396a4-3d5c-4681-9ba4-08296c4279a0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.813691043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710188135813665865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563325,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=682396a4-3d5c-4681-9ba4-08296c4279a0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.814767011Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c26842d0-abd0-454d-83d6-9312b07991c6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.814837309Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c26842d0-abd0-454d-83d6-9312b07991c6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:15:35 addons-118179 crio[678]: time="2024-03-11 20:15:35.815120557Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:949d655b6adad0e70f44f8048baf19f648b6f160c04fdd7ae3d75f2ba165bbe4,PodSandboxId:259572018db0358570b70172d69a11f30d4405d97e978284febea223b02a2ce2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710188127475575102,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-d8jdt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 880146a6-e693-44b3-9453-0c3dfc82e4a6,},Annotations:map[string]string{io.kubernetes.container.hash: 6de59312,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89880dcafa47dfe2752931f85156e5d1bc679539dc46fe04d6792fb2ce76e0f,PodSandboxId:56b48f8ecf0a2cf8041dee427092b69a9336fa950218d983b91d74d162754d0c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710188004640166905,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-9hb9p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 67df60ae-ad11-4767-b3eb-ccfceb9799a9,},Annotat
ions:map[string]string{io.kubernetes.container.hash: d10c557d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c659fdc24ce30c9f825069c19cb5273f57692dc8dfbf1e1709355ebf2ba72444,PodSandboxId:fe08a3d7b3846f87747a791a6a315a0e65a7661af3a3b65165fc93389c2a0a80,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710187988852766042,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: a141ff9e-e505-4bd1-ac33-95eb2183ab84,},Annotations:map[string]string{io.kubernetes.container.hash: 58cbe66,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fabb315f957c64d36628aa18c064abebf8f3d67179673558982b540ab45139e7,PodSandboxId:509b89ed2e606cc4e7445fae10c6338735831bb18310d94df6bb1f5b9f1df2ca,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1710187957644888234,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-5lxzk,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 9677c2c3-7bb3-4fc5-a683-6d496f54593e,},Annotations:map[string]string{io.kubernetes.container.hash: 4750faf6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46646564686ce95743639d3cbebd586e920c24e9b1d7bbf9bff4898e807945d0,PodSandboxId:d72b56b69121138704370d5c7de70c45885dc44019d0587e84aadb3ed43e3719,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAIN
ER_EXITED,CreatedAt:1710187934381908974,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7zgj9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2c70aed1-fd7a-4598-b3ce-3d5d04bc62c6,},Annotations:map[string]string{io.kubernetes.container.hash: af80f792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7494df3f8363aac6a872128d1587f67aa71ff661c83cd9fd6625b560987ec35c,PodSandboxId:2db2b5102a05283d3910c5aace31a6189de801e5e72a9fa3f349a24e7f5693d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a
1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710187934265226019,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7zt97,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ff650163-9820-4984-80be-e98af6572e34,},Annotations:map[string]string{io.kubernetes.container.hash: 2706cbc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f59c49deef4d2c1addb7bec2cdb6963f27ddbbd7bdb20597018d38ca6653eba5,PodSandboxId:9a4a5bb81f7716027dc13c062fee27d6d57b9eb220fd90f0412924e4715d8618,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb
18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710187929824656990,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-ttwqt,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb8f70d5-1cf3-44eb-89cf-5d530bebda0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9431b0d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc893e6380cb372874f04c9704be7997538b6195be2c9a39db45c8d3a25df99,PodSandboxId:d0e7dc8b0e7b0c5fe39c9c4031a7b956e9a11ab2304d0c19c34bae408cd7d242,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710187881232514066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c49225e1-b1da-4ad7-bdd2-655ce1760e47,},Annotations:map[string]string{io.kubernetes.container.hash: 2f5e6333,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a21ccf11f2216269811f5e7e4717e76f14289e4e2691562b14ecb6696536b4d2,PodSandboxId:dde9cf57f0ff60559dec12104c36f9301fc4b0c5e8220521d0de90f97187f5ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710187876360411680,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hmxgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919dc44-37b7-44f5-a43d-6c181b3205d2,},Annotations:map[string]string{io.kubernetes.container.hash: 15693c44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b454639944f8749341ef1de1d9662b0a72d1500ff95f7a9bbd8a9dc93543f75,PodSandboxId:5d588a899805c1b175c4b7f784d690f01749a7ec9b8178c5747483b6f3f01bea
,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710187876156799145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-875cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff8a34e-a286-44af-b9b4-d58337259a79,},Annotations:map[string]string{io.kubernetes.container.hash: c34cbd62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69ba60484698ed657204d9ec1e69ac44072f72bf176f465e052e471fe3a06ff,PodSandboxId:a7b0900d4a81c3a208b2dd1431160613ac641676d00c717be0ca9110bb9d6808,Metadata:&ContainerMetadata{Name
:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710187854361110899,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 617f297a7d9b89d28d441dd49ca22783,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5593cb0b0c48316ea2def3ed403867f6804f58a5d656167bc940e96570c7aff7,PodSandboxId:08a19e9c78e4ca549527ff183156b16931fb21ac32111f4cd5e98b9ded1c2797,Metadata:&ContainerMet
adata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710187854337812105,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3beb886cd73cc800cb53572ee8b16955,},Annotations:map[string]string{io.kubernetes.container.hash: 9a2dbca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40763da36815929044f217a915b815772c0d2cf43c36a2bff42f83c6318a46a,PodSandboxId:64f20733d9c1fa7314434944760c6bac1715987b00aabf64c77c842768630563,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&Im
ageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710187854299790663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7366e334d56c19fa7026982e90836f,},Annotations:map[string]string{io.kubernetes.container.hash: dfe7758e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6cbf60abac7c2a2f81bac1bff8f52147cdfd994106316d9423d582b7826beff,PodSandboxId:7706f340ebd3582a03e3e567159b4c68e25b479d8b8f407b12b196f252030fd6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3d
b313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710187854268651949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-118179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a28dcdab218c66dc1fb94fb1b955517,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c26842d0-abd0-454d-83d6-9312b07991c6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	949d655b6adad       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   259572018db03       hello-world-app-5d77478584-d8jdt
	c89880dcafa47       ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750                        2 minutes ago       Running             headlamp                  0                   56b48f8ecf0a2       headlamp-5485c556b-9hb9p
	c659fdc24ce30       docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                              2 minutes ago       Running             nginx                     0                   fe08a3d7b3846       nginx
	fabb315f957c6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                 2 minutes ago       Running             gcp-auth                  0                   509b89ed2e606       gcp-auth-5f6b4f85fd-5lxzk
	46646564686ce       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              patch                     0                   d72b56b691211       ingress-nginx-admission-patch-7zgj9
	7494df3f8363a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   2db2b5102a052       ingress-nginx-admission-create-7zt97
	f59c49deef4d2       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   9a4a5bb81f771       yakd-dashboard-9947fc6bf-ttwqt
	acc893e6380cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   d0e7dc8b0e7b0       storage-provisioner
	a21ccf11f2216       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   dde9cf57f0ff6       coredns-5dd5756b68-hmxgl
	4b454639944f8       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   5d588a899805c       kube-proxy-875cw
	b69ba60484698       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   a7b0900d4a81c       kube-controller-manager-addons-118179
	5593cb0b0c483       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   08a19e9c78e4c       etcd-addons-118179
	b40763da36815       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   64f20733d9c1f       kube-apiserver-addons-118179
	a6cbf60abac7c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   7706f340ebd35       kube-scheduler-addons-118179
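
	The table above is the human-readable counterpart of the ListContainers responses logged by crio earlier in this section. As a rough sketch (assuming crictl is installed on the node and crio is listening on the socket advertised in the node annotations, unix:///var/run/crio/crio.sock), an equivalent listing could be pulled by hand:

	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

	Adding "-o json" would return the same /runtime.v1.RuntimeService/ListContainers payload seen in the debug entries above.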
	
	
	==> coredns [a21ccf11f2216269811f5e7e4717e76f14289e4e2691562b14ecb6696536b4d2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:41463 - 61442 "HINFO IN 2954479974659198715.976261927802763797. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008185958s
	[INFO] 10.244.0.22:35927 - 39235 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000314033s
	[INFO] 10.244.0.22:40422 - 45367 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000168062s
	[INFO] 10.244.0.22:40167 - 44943 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151113s
	[INFO] 10.244.0.22:39620 - 51514 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000140327s
	[INFO] 10.244.0.22:56967 - 5505 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150293s
	[INFO] 10.244.0.22:41461 - 38828 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000083918s
	[INFO] 10.244.0.22:37657 - 56426 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000704613s
	[INFO] 10.244.0.22:53958 - 48798 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.001022014s
	[INFO] 10.244.0.26:57020 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00028001s
	[INFO] 10.244.0.26:41702 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138114s
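
	These lines are CoreDNS's own stdout for the container ID shown in the header. Purely as an illustration, using the pod name and container ID from the listings above, the same output could be re-fetched either through the API server or straight from the runtime:

	  kubectl --context addons-118179 -n kube-system logs coredns-5dd5756b68-hmxgl
	  sudo crictl logs a21ccf11f2216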
	
	
	==> describe nodes <==
	Name:               addons-118179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-118179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=addons-118179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T20_11_00_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-118179
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:10:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-118179
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:15:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:13:33 +0000   Mon, 11 Mar 2024 20:10:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:13:33 +0000   Mon, 11 Mar 2024 20:10:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:13:33 +0000   Mon, 11 Mar 2024 20:10:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:13:33 +0000   Mon, 11 Mar 2024 20:11:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    addons-118179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 97de979deb4941bdaba47e2ed1ff5eb1
	  System UUID:                97de979d-eb49-41bd-aba4-7e2ed1ff5eb1
	  Boot ID:                    e6b567a9-4e9f-4464-b22d-aa14dacb22fb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-d8jdt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-5f6b4f85fd-5lxzk                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  headlamp                    headlamp-5485c556b-9hb9p                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 coredns-5dd5756b68-hmxgl                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m23s
	  kube-system                 etcd-addons-118179                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m37s
	  kube-system                 kube-apiserver-addons-118179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-controller-manager-addons-118179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-proxy-875cw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-addons-118179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-ttwqt           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m18s  kube-proxy       
	  Normal  Starting                 4m36s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m36s  kubelet          Node addons-118179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s  kubelet          Node addons-118179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s  kubelet          Node addons-118179 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m35s  kubelet          Node addons-118179 status is now: NodeReady
	  Normal  RegisteredNode           4m24s  node-controller  Node addons-118179 event: Registered Node addons-118179 in Controller
	
	
	==> dmesg <==
	[Mar11 20:11] systemd-fstab-generator[1489]: Ignoring "noauto" option for root device
	[  +0.196044] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.407553] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.118958] kauditd_printk_skb: 116 callbacks suppressed
	[  +5.015560] kauditd_printk_skb: 66 callbacks suppressed
	[  +8.413338] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.774989] kauditd_printk_skb: 9 callbacks suppressed
	[Mar11 20:12] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.178103] kauditd_printk_skb: 30 callbacks suppressed
	[  +6.472752] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.211852] kauditd_printk_skb: 61 callbacks suppressed
	[  +6.918564] kauditd_printk_skb: 37 callbacks suppressed
	[  +7.265358] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.900310] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.046830] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.058863] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.173088] kauditd_printk_skb: 39 callbacks suppressed
	[Mar11 20:13] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.564192] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.429406] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.985091] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.483181] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.499562] kauditd_printk_skb: 25 callbacks suppressed
	[Mar11 20:15] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.312433] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [5593cb0b0c48316ea2def3ed403867f6804f58a5d656167bc940e96570c7aff7] <==
	{"level":"warn","ts":"2024-03-11T20:12:32.602623Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.416859ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-03-11T20:12:32.602676Z","caller":"traceutil/trace.go:171","msg":"trace[1070327151] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1133; }","duration":"254.472096ms","start":"2024-03-11T20:12:32.348197Z","end":"2024-03-11T20:12:32.602669Z","steps":["trace[1070327151] 'agreement among raft nodes before linearized reading'  (duration: 254.393343ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:12:32.602797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.801677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10811"}
	{"level":"info","ts":"2024-03-11T20:12:32.603397Z","caller":"traceutil/trace.go:171","msg":"trace[1940215796] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1133; }","duration":"250.404174ms","start":"2024-03-11T20:12:32.352982Z","end":"2024-03-11T20:12:32.603386Z","steps":["trace[1940215796] 'agreement among raft nodes before linearized reading'  (duration: 249.75295ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:12:32.60317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.774993ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-03-11T20:12:32.603712Z","caller":"traceutil/trace.go:171","msg":"trace[2030558379] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1133; }","duration":"212.316676ms","start":"2024-03-11T20:12:32.391387Z","end":"2024-03-11T20:12:32.603704Z","steps":["trace[2030558379] 'agreement among raft nodes before linearized reading'  (duration: 211.763378ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:12:35.532993Z","caller":"traceutil/trace.go:171","msg":"trace[1706282457] linearizableReadLoop","detail":"{readStateIndex:1171; appliedIndex:1170; }","duration":"180.925532ms","start":"2024-03-11T20:12:35.352055Z","end":"2024-03-11T20:12:35.53298Z","steps":["trace[1706282457] 'read index received'  (duration: 180.788988ms)","trace[1706282457] 'applied index is now lower than readState.Index'  (duration: 136.11µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:12:35.533192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.132922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10811"}
	{"level":"info","ts":"2024-03-11T20:12:35.533482Z","caller":"traceutil/trace.go:171","msg":"trace[1107699506] transaction","detail":"{read_only:false; response_revision:1138; number_of_response:1; }","duration":"384.038055ms","start":"2024-03-11T20:12:35.149434Z","end":"2024-03-11T20:12:35.533472Z","steps":["trace[1107699506] 'process raft request'  (duration: 383.447164ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:12:35.533583Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T20:12:35.14942Z","time spent":"384.114459ms","remote":"127.0.0.1:49952","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1130 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-03-11T20:12:35.533223Z","caller":"traceutil/trace.go:171","msg":"trace[1416538713] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1138; }","duration":"181.180392ms","start":"2024-03-11T20:12:35.352031Z","end":"2024-03-11T20:12:35.533212Z","steps":["trace[1416538713] 'agreement among raft nodes before linearized reading'  (duration: 181.031868ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:13:06.378716Z","caller":"traceutil/trace.go:171","msg":"trace[1112320945] linearizableReadLoop","detail":"{readStateIndex:1484; appliedIndex:1483; }","duration":"125.026649ms","start":"2024-03-11T20:13:06.25367Z","end":"2024-03-11T20:13:06.378696Z","steps":["trace[1112320945] 'read index received'  (duration: 124.86567ms)","trace[1112320945] 'applied index is now lower than readState.Index'  (duration: 160.267µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:13:06.378949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.274476ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8582"}
	{"level":"info","ts":"2024-03-11T20:13:06.378987Z","caller":"traceutil/trace.go:171","msg":"trace[1457777233] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1437; }","duration":"125.32758ms","start":"2024-03-11T20:13:06.253647Z","end":"2024-03-11T20:13:06.378974Z","steps":["trace[1457777233] 'agreement among raft nodes before linearized reading'  (duration: 125.182206ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:13:06.379442Z","caller":"traceutil/trace.go:171","msg":"trace[1427810407] transaction","detail":"{read_only:false; response_revision:1437; number_of_response:1; }","duration":"130.685901ms","start":"2024-03-11T20:13:06.248743Z","end":"2024-03-11T20:13:06.379429Z","steps":["trace[1427810407] 'process raft request'  (duration: 129.838842ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:13:06.612466Z","caller":"traceutil/trace.go:171","msg":"trace[747566255] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1438; }","duration":"216.66925ms","start":"2024-03-11T20:13:06.395784Z","end":"2024-03-11T20:13:06.612453Z","steps":["trace[747566255] 'process raft request'  (duration: 195.385979ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:13:11.862366Z","caller":"traceutil/trace.go:171","msg":"trace[1373435018] linearizableReadLoop","detail":"{readStateIndex:1535; appliedIndex:1534; }","duration":"256.688791ms","start":"2024-03-11T20:13:11.605664Z","end":"2024-03-11T20:13:11.862353Z","steps":["trace[1373435018] 'read index received'  (duration: 256.279474ms)","trace[1373435018] 'applied index is now lower than readState.Index'  (duration: 408.645µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:13:11.862494Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.830091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-11T20:13:11.862516Z","caller":"traceutil/trace.go:171","msg":"trace[912409897] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1486; }","duration":"256.864874ms","start":"2024-03-11T20:13:11.605645Z","end":"2024-03-11T20:13:11.86251Z","steps":["trace[912409897] 'agreement among raft nodes before linearized reading'  (duration: 256.786322ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:13:11.862715Z","caller":"traceutil/trace.go:171","msg":"trace[1240341623] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1486; }","duration":"278.697637ms","start":"2024-03-11T20:13:11.584011Z","end":"2024-03-11T20:13:11.862709Z","steps":["trace[1240341623] 'process raft request'  (duration: 277.967375ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:13:22.989099Z","caller":"traceutil/trace.go:171","msg":"trace[1637292914] transaction","detail":"{read_only:false; response_revision:1587; number_of_response:1; }","duration":"225.947343ms","start":"2024-03-11T20:13:22.76313Z","end":"2024-03-11T20:13:22.989077Z","steps":["trace[1637292914] 'process raft request'  (duration: 225.801424ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:13:22.990759Z","caller":"traceutil/trace.go:171","msg":"trace[1097862218] linearizableReadLoop","detail":"{readStateIndex:1639; appliedIndex:1639; }","duration":"148.547941ms","start":"2024-03-11T20:13:22.842198Z","end":"2024-03-11T20:13:22.990746Z","steps":["trace[1097862218] 'read index received'  (duration: 148.543737ms)","trace[1097862218] 'applied index is now lower than readState.Index'  (duration: 3.368µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:13:22.990908Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.666521ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5637"}
	{"level":"info","ts":"2024-03-11T20:13:22.990933Z","caller":"traceutil/trace.go:171","msg":"trace[1472339034] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1587; }","duration":"148.751795ms","start":"2024-03-11T20:13:22.842174Z","end":"2024-03-11T20:13:22.990926Z","steps":["trace[1472339034] 'agreement among raft nodes before linearized reading'  (duration: 148.625917ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:13:58.923447Z","caller":"traceutil/trace.go:171","msg":"trace[363224028] transaction","detail":"{read_only:false; response_revision:1760; number_of_response:1; }","duration":"123.202714ms","start":"2024-03-11T20:13:58.800215Z","end":"2024-03-11T20:13:58.923418Z","steps":["trace[363224028] 'process raft request'  (duration: 121.583381ms)"],"step_count":1}
	
	
	==> gcp-auth [fabb315f957c64d36628aa18c064abebf8f3d67179673558982b540ab45139e7] <==
	2024/03/11 20:12:37 GCP Auth Webhook started!
	2024/03/11 20:12:39 Ready to marshal response ...
	2024/03/11 20:12:39 Ready to write response ...
	2024/03/11 20:12:39 Ready to marshal response ...
	2024/03/11 20:12:39 Ready to write response ...
	2024/03/11 20:12:47 Ready to marshal response ...
	2024/03/11 20:12:47 Ready to write response ...
	2024/03/11 20:12:49 Ready to marshal response ...
	2024/03/11 20:12:49 Ready to write response ...
	2024/03/11 20:12:51 Ready to marshal response ...
	2024/03/11 20:12:51 Ready to write response ...
	2024/03/11 20:13:01 Ready to marshal response ...
	2024/03/11 20:13:01 Ready to write response ...
	2024/03/11 20:13:02 Ready to marshal response ...
	2024/03/11 20:13:02 Ready to write response ...
	2024/03/11 20:13:19 Ready to marshal response ...
	2024/03/11 20:13:19 Ready to write response ...
	2024/03/11 20:13:19 Ready to marshal response ...
	2024/03/11 20:13:19 Ready to write response ...
	2024/03/11 20:13:19 Ready to marshal response ...
	2024/03/11 20:13:19 Ready to write response ...
	2024/03/11 20:13:21 Ready to marshal response ...
	2024/03/11 20:13:21 Ready to write response ...
	2024/03/11 20:15:25 Ready to marshal response ...
	2024/03/11 20:15:25 Ready to write response ...
	
	
	==> kernel <==
	 20:15:36 up 5 min,  0 users,  load average: 1.24, 1.57, 0.79
	Linux addons-118179 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b40763da36815929044f217a915b815772c0d2cf43c36a2bff42f83c6318a46a] <==
	I0311 20:13:06.694400       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0311 20:13:07.736133       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0311 20:13:08.840339       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0311 20:13:14.321165       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0311 20:13:19.142738       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.162.117"}
	I0311 20:13:40.181840       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:13:40.182063       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:13:40.196504       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:13:40.196575       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:13:40.204543       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:13:40.204604       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:13:40.229364       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:13:40.229424       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:13:40.250943       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:13:40.251030       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:13:40.252123       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:13:40.252205       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:13:40.300693       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:13:40.300765       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:13:40.302786       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:13:40.303029       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0311 20:13:41.205355       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0311 20:13:41.303273       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0311 20:13:41.348751       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0311 20:15:25.494498       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.132.187"}
	
	
	==> kube-controller-manager [b69ba60484698ed657204d9ec1e69ac44072f72bf176f465e052e471fe3a06ff] <==
	W0311 20:14:21.552917       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:14:21.552968       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 20:14:42.085690       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:14:42.085767       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 20:14:56.177518       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:14:56.177620       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 20:15:02.670430       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:15:02.670489       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 20:15:09.299830       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:15:09.299940       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 20:15:23.362382       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:15:23.362422       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0311 20:15:25.303481       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0311 20:15:25.345011       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-d8jdt"
	I0311 20:15:25.360745       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="56.370225ms"
	I0311 20:15:25.373481       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.65878ms"
	I0311 20:15:25.374210       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="56.857µs"
	I0311 20:15:25.393089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="62.633µs"
	I0311 20:15:27.766791       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0311 20:15:27.768642       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="5.79µs"
	I0311 20:15:27.777566       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0311 20:15:27.897724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.236437ms"
	I0311 20:15:27.898643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="101.811µs"
	W0311 20:15:32.589959       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:15:32.590027       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [4b454639944f8749341ef1de1d9662b0a72d1500ff95f7a9bbd8a9dc93543f75] <==
	I0311 20:11:16.998468       1 server_others.go:69] "Using iptables proxy"
	I0311 20:11:17.016552       1 node.go:141] Successfully retrieved node IP: 192.168.39.50
	I0311 20:11:17.207132       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 20:11:17.207177       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 20:11:17.231891       1 server_others.go:152] "Using iptables Proxier"
	I0311 20:11:17.231951       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 20:11:17.232128       1 server.go:846] "Version info" version="v1.28.4"
	I0311 20:11:17.232137       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:11:17.233153       1 config.go:188] "Starting service config controller"
	I0311 20:11:17.233161       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 20:11:17.233175       1 config.go:97] "Starting endpoint slice config controller"
	I0311 20:11:17.233178       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 20:11:17.236802       1 config.go:315] "Starting node config controller"
	I0311 20:11:17.236861       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 20:11:17.334789       1 shared_informer.go:318] Caches are synced for service config
	I0311 20:11:17.334896       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 20:11:17.342644       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a6cbf60abac7c2a2f81bac1bff8f52147cdfd994106316d9423d582b7826beff] <==
	W0311 20:10:57.115761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 20:10:57.115801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 20:10:57.115535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 20:10:57.115984       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 20:10:57.116056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 20:10:57.116085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 20:10:58.062435       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 20:10:58.062607       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 20:10:58.071503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 20:10:58.072455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 20:10:58.192529       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 20:10:58.192669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0311 20:10:58.199409       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 20:10:58.199472       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 20:10:58.221512       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 20:10:58.221582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 20:10:58.224205       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0311 20:10:58.224371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 20:10:58.236149       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 20:10:58.236285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 20:10:58.350624       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 20:10:58.350675       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 20:10:58.352965       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 20:10:58.353014       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0311 20:11:00.090341       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 20:15:25 addons-118179 kubelet[1285]: I0311 20:15:25.360008    1285 memory_manager.go:346] "RemoveStaleState removing state" podUID="01207a1b-381b-4846-ae53-0191c8174769" containerName="volume-snapshot-controller"
	Mar 11 20:15:25 addons-118179 kubelet[1285]: I0311 20:15:25.360118    1285 memory_manager.go:346] "RemoveStaleState removing state" podUID="f6549098-66e9-4d4b-b258-5f76c90b0a35" containerName="volume-snapshot-controller"
	Mar 11 20:15:25 addons-118179 kubelet[1285]: I0311 20:15:25.360154    1285 memory_manager.go:346] "RemoveStaleState removing state" podUID="98e28188-80ac-4355-9b78-ab9b382862fc" containerName="csi-resizer"
	Mar 11 20:15:25 addons-118179 kubelet[1285]: I0311 20:15:25.360343    1285 memory_manager.go:346] "RemoveStaleState removing state" podUID="b75fc6d5-7127-475d-98eb-5aef17d18407" containerName="csi-attacher"
	Mar 11 20:15:25 addons-118179 kubelet[1285]: I0311 20:15:25.360451    1285 memory_manager.go:346] "RemoveStaleState removing state" podUID="0d926afa-937c-4e6b-aa6f-e85805d579b5" containerName="hostpath"
	Mar 11 20:15:25 addons-118179 kubelet[1285]: I0311 20:15:25.473068    1285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf6m8\" (UniqueName: \"kubernetes.io/projected/880146a6-e693-44b3-9453-0c3dfc82e4a6-kube-api-access-cf6m8\") pod \"hello-world-app-5d77478584-d8jdt\" (UID: \"880146a6-e693-44b3-9453-0c3dfc82e4a6\") " pod="default/hello-world-app-5d77478584-d8jdt"
	Mar 11 20:15:25 addons-118179 kubelet[1285]: I0311 20:15:25.473447    1285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/880146a6-e693-44b3-9453-0c3dfc82e4a6-gcp-creds\") pod \"hello-world-app-5d77478584-d8jdt\" (UID: \"880146a6-e693-44b3-9453-0c3dfc82e4a6\") " pod="default/hello-world-app-5d77478584-d8jdt"
	Mar 11 20:15:26 addons-118179 kubelet[1285]: I0311 20:15:26.796610    1285 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wc42\" (UniqueName: \"kubernetes.io/projected/f473aa81-6f8d-4fe8-af58-2b497e88e3a0-kube-api-access-7wc42\") pod \"f473aa81-6f8d-4fe8-af58-2b497e88e3a0\" (UID: \"f473aa81-6f8d-4fe8-af58-2b497e88e3a0\") "
	Mar 11 20:15:26 addons-118179 kubelet[1285]: I0311 20:15:26.808687    1285 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f473aa81-6f8d-4fe8-af58-2b497e88e3a0-kube-api-access-7wc42" (OuterVolumeSpecName: "kube-api-access-7wc42") pod "f473aa81-6f8d-4fe8-af58-2b497e88e3a0" (UID: "f473aa81-6f8d-4fe8-af58-2b497e88e3a0"). InnerVolumeSpecName "kube-api-access-7wc42". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 11 20:15:26 addons-118179 kubelet[1285]: I0311 20:15:26.823951    1285 scope.go:117] "RemoveContainer" containerID="748cffe02a937b6d6fd54fffdfb9011521bcc83e15d3145e329aa88b5e7afb11"
	Mar 11 20:15:26 addons-118179 kubelet[1285]: I0311 20:15:26.897179    1285 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7wc42\" (UniqueName: \"kubernetes.io/projected/f473aa81-6f8d-4fe8-af58-2b497e88e3a0-kube-api-access-7wc42\") on node \"addons-118179\" DevicePath \"\""
	Mar 11 20:15:27 addons-118179 kubelet[1285]: I0311 20:15:27.068586    1285 scope.go:117] "RemoveContainer" containerID="748cffe02a937b6d6fd54fffdfb9011521bcc83e15d3145e329aa88b5e7afb11"
	Mar 11 20:15:27 addons-118179 kubelet[1285]: E0311 20:15:27.070577    1285 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"748cffe02a937b6d6fd54fffdfb9011521bcc83e15d3145e329aa88b5e7afb11\": container with ID starting with 748cffe02a937b6d6fd54fffdfb9011521bcc83e15d3145e329aa88b5e7afb11 not found: ID does not exist" containerID="748cffe02a937b6d6fd54fffdfb9011521bcc83e15d3145e329aa88b5e7afb11"
	Mar 11 20:15:27 addons-118179 kubelet[1285]: I0311 20:15:27.070621    1285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"748cffe02a937b6d6fd54fffdfb9011521bcc83e15d3145e329aa88b5e7afb11"} err="failed to get container status \"748cffe02a937b6d6fd54fffdfb9011521bcc83e15d3145e329aa88b5e7afb11\": rpc error: code = NotFound desc = could not find container \"748cffe02a937b6d6fd54fffdfb9011521bcc83e15d3145e329aa88b5e7afb11\": container with ID starting with 748cffe02a937b6d6fd54fffdfb9011521bcc83e15d3145e329aa88b5e7afb11 not found: ID does not exist"
	Mar 11 20:15:28 addons-118179 kubelet[1285]: I0311 20:15:28.465931    1285 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2c70aed1-fd7a-4598-b3ce-3d5d04bc62c6" path="/var/lib/kubelet/pods/2c70aed1-fd7a-4598-b3ce-3d5d04bc62c6/volumes"
	Mar 11 20:15:28 addons-118179 kubelet[1285]: I0311 20:15:28.466474    1285 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f473aa81-6f8d-4fe8-af58-2b497e88e3a0" path="/var/lib/kubelet/pods/f473aa81-6f8d-4fe8-af58-2b497e88e3a0/volumes"
	Mar 11 20:15:28 addons-118179 kubelet[1285]: I0311 20:15:28.466865    1285 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ff650163-9820-4984-80be-e98af6572e34" path="/var/lib/kubelet/pods/ff650163-9820-4984-80be-e98af6572e34/volumes"
	Mar 11 20:15:31 addons-118179 kubelet[1285]: I0311 20:15:31.034961    1285 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk286\" (UniqueName: \"kubernetes.io/projected/4519cf29-5e2a-4f2c-9447-d512e7bf1b40-kube-api-access-dk286\") pod \"4519cf29-5e2a-4f2c-9447-d512e7bf1b40\" (UID: \"4519cf29-5e2a-4f2c-9447-d512e7bf1b40\") "
	Mar 11 20:15:31 addons-118179 kubelet[1285]: I0311 20:15:31.035004    1285 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4519cf29-5e2a-4f2c-9447-d512e7bf1b40-webhook-cert\") pod \"4519cf29-5e2a-4f2c-9447-d512e7bf1b40\" (UID: \"4519cf29-5e2a-4f2c-9447-d512e7bf1b40\") "
	Mar 11 20:15:31 addons-118179 kubelet[1285]: I0311 20:15:31.037836    1285 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4519cf29-5e2a-4f2c-9447-d512e7bf1b40-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4519cf29-5e2a-4f2c-9447-d512e7bf1b40" (UID: "4519cf29-5e2a-4f2c-9447-d512e7bf1b40"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 11 20:15:31 addons-118179 kubelet[1285]: I0311 20:15:31.040484    1285 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4519cf29-5e2a-4f2c-9447-d512e7bf1b40-kube-api-access-dk286" (OuterVolumeSpecName: "kube-api-access-dk286") pod "4519cf29-5e2a-4f2c-9447-d512e7bf1b40" (UID: "4519cf29-5e2a-4f2c-9447-d512e7bf1b40"). InnerVolumeSpecName "kube-api-access-dk286". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 11 20:15:31 addons-118179 kubelet[1285]: I0311 20:15:31.135801    1285 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dk286\" (UniqueName: \"kubernetes.io/projected/4519cf29-5e2a-4f2c-9447-d512e7bf1b40-kube-api-access-dk286\") on node \"addons-118179\" DevicePath \"\""
	Mar 11 20:15:31 addons-118179 kubelet[1285]: I0311 20:15:31.135856    1285 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4519cf29-5e2a-4f2c-9447-d512e7bf1b40-webhook-cert\") on node \"addons-118179\" DevicePath \"\""
	Mar 11 20:15:31 addons-118179 kubelet[1285]: I0311 20:15:31.888372    1285 scope.go:117] "RemoveContainer" containerID="42d7f5136b104c60297ace5b957d09955c75f7c6f7c304643d932d978e2c83ac"
	Mar 11 20:15:32 addons-118179 kubelet[1285]: I0311 20:15:32.466491    1285 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4519cf29-5e2a-4f2c-9447-d512e7bf1b40" path="/var/lib/kubelet/pods/4519cf29-5e2a-4f2c-9447-d512e7bf1b40/volumes"
	
	
	==> storage-provisioner [acc893e6380cb372874f04c9704be7997538b6195be2c9a39db45c8d3a25df99] <==
	I0311 20:11:22.673926       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 20:11:22.733348       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 20:11:22.733481       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 20:11:22.766053       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 20:11:22.766198       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-118179_ab0b97d7-52c8-4525-9355-9d55723e955a!
	I0311 20:11:22.766702       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c3cf4c6-0292-4a91-b5c5-cf50d4dd23e7", APIVersion:"v1", ResourceVersion:"651", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-118179_ab0b97d7-52c8-4525-9355-9d55723e955a became leader
	I0311 20:11:22.872118       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-118179_ab0b97d7-52c8-4525-9355-9d55723e955a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-118179 -n addons-118179
helpers_test.go:261: (dbg) Run:  kubectl --context addons-118179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.32s)
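At the time of this dump no ingress-nginx pods appear in the node's pod table above. Assuming the addons-118179 profile is still reachable, a rough manual follow-up with the same kubectl context the post-mortem helpers use might look like the sketch below (namespace and deployment name taken from the controller-manager log above; adjust or skip the logs call if the addon has already been torn down):

	kubectl --context addons-118179 -n ingress-nginx get pods,svc,jobs -o wide
	kubectl --context addons-118179 get ingress -A
	kubectl --context addons-118179 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=100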

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.38s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-118179
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-118179: exit status 82 (2m0.47063741s)

                                                
                                                
-- stdout --
	* Stopping node "addons-118179"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-118179" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-118179
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-118179: exit status 11 (21.626215853s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.50:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-118179" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-118179
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-118179: exit status 11 (6.144903667s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.50:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-118179" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-118179
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-118179: exit status 11 (6.141442662s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.50:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-118179" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.38s)
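Both the stop timeout (GUEST_STOP_TIMEOUT) and the later "no route to host" on 192.168.39.50:22 point at a guest that stopped answering SSH while the driver still reported it as "Running". A sketch of a manual check on the CI host, assuming the kvm2 driver's usual qemu:///system connection and that the libvirt domain carries the profile name as it does elsewhere in these logs:

	# Confirm what libvirt thinks the domain is doing
	virsh --connect qemu:///system list --all
	virsh --connect qemu:///system domstate addons-118179
	# Collect the log files the error boxes above point at
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log
	cat /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log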

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244607 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244607 image ls --format yaml --alsologtostderr:
I0311 20:22:42.536253   27312 out.go:291] Setting OutFile to fd 1 ...
I0311 20:22:42.536384   27312 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 20:22:42.536392   27312 out.go:304] Setting ErrFile to fd 2...
I0311 20:22:42.536395   27312 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 20:22:42.536576   27312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
I0311 20:22:42.537109   27312 config.go:182] Loaded profile config "functional-244607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 20:22:42.537203   27312 config.go:182] Loaded profile config "functional-244607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 20:22:42.537566   27312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0311 20:22:42.537610   27312 main.go:141] libmachine: Launching plugin server for driver kvm2
I0311 20:22:42.552055   27312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39483
I0311 20:22:42.552422   27312 main.go:141] libmachine: () Calling .GetVersion
I0311 20:22:42.552925   27312 main.go:141] libmachine: Using API Version  1
I0311 20:22:42.552949   27312 main.go:141] libmachine: () Calling .SetConfigRaw
I0311 20:22:42.553274   27312 main.go:141] libmachine: () Calling .GetMachineName
I0311 20:22:42.553493   27312 main.go:141] libmachine: (functional-244607) Calling .GetState
I0311 20:22:42.555108   27312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0311 20:22:42.555144   27312 main.go:141] libmachine: Launching plugin server for driver kvm2
I0311 20:22:42.569443   27312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36537
I0311 20:22:42.569842   27312 main.go:141] libmachine: () Calling .GetVersion
I0311 20:22:42.570272   27312 main.go:141] libmachine: Using API Version  1
I0311 20:22:42.570289   27312 main.go:141] libmachine: () Calling .SetConfigRaw
I0311 20:22:42.570695   27312 main.go:141] libmachine: () Calling .GetMachineName
I0311 20:22:42.570866   27312 main.go:141] libmachine: (functional-244607) Calling .DriverName
I0311 20:22:42.571094   27312 ssh_runner.go:195] Run: systemctl --version
I0311 20:22:42.571113   27312 main.go:141] libmachine: (functional-244607) Calling .GetSSHHostname
I0311 20:22:42.574011   27312 main.go:141] libmachine: (functional-244607) DBG | domain functional-244607 has defined MAC address 52:54:00:a3:1f:af in network mk-functional-244607
I0311 20:22:42.574410   27312 main.go:141] libmachine: (functional-244607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:1f:af", ip: ""} in network mk-functional-244607: {Iface:virbr1 ExpiryTime:2024-03-11 21:19:41 +0000 UTC Type:0 Mac:52:54:00:a3:1f:af Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:functional-244607 Clientid:01:52:54:00:a3:1f:af}
I0311 20:22:42.574437   27312 main.go:141] libmachine: (functional-244607) DBG | domain functional-244607 has defined IP address 192.168.39.51 and MAC address 52:54:00:a3:1f:af in network mk-functional-244607
I0311 20:22:42.574587   27312 main.go:141] libmachine: (functional-244607) Calling .GetSSHPort
I0311 20:22:42.574733   27312 main.go:141] libmachine: (functional-244607) Calling .GetSSHKeyPath
I0311 20:22:42.574877   27312 main.go:141] libmachine: (functional-244607) Calling .GetSSHUsername
I0311 20:22:42.575017   27312 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/functional-244607/id_rsa Username:docker}
I0311 20:22:42.705191   27312 ssh_runner.go:195] Run: sudo crictl images --output json
W0311 20:22:42.798713   27312 cache_images.go:715] Failed to list images for profile functional-244607 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E0311 20:22:42.773532    7941 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = locating item named \"manifest\" for image with ID \"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a\" (consider removing the image to resolve the issue): file does not exist" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2024-03-11T20:22:42Z" level=fatal msg="listing images: rpc error: code = Unknown desc = locating item named \"manifest\" for image with ID \"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a\" (consider removing the image to resolve the issue): file does not exist"
I0311 20:22:42.798815   27312 main.go:141] libmachine: Making call to close driver server
I0311 20:22:42.798832   27312 main.go:141] libmachine: (functional-244607) Calling .Close
I0311 20:22:42.799115   27312 main.go:141] libmachine: Successfully made call to close driver server
I0311 20:22:42.799133   27312 main.go:141] libmachine: Making call to close connection to plugin binary
I0311 20:22:42.799142   27312 main.go:141] libmachine: Making call to close driver server
I0311 20:22:42.799150   27312 main.go:141] libmachine: (functional-244607) Calling .Close
I0311 20:22:42.799362   27312 main.go:141] libmachine: (functional-244607) DBG | Closing plugin on server side
I0311 20:22:42.799443   27312 main.go:141] libmachine: Successfully made call to close driver server
I0311 20:22:42.799487   27312 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
2024/03/11 20:22:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
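The empty listing comes from crictl, which aborts because the image store holds an entry (ID beae173ccac6...) whose "manifest" item is missing; the error message itself suggests removing that image. A sketch of that remediation, assuming SSH access to the functional-244607 VM still works:

	# Reproduce the failing listing, then drop the broken image as the error suggests
	out/minikube-linux-amd64 -p functional-244607 ssh "sudo crictl images --output json"
	out/minikube-linux-amd64 -p functional-244607 ssh "sudo crictl rmi beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a"
	out/minikube-linux-amd64 -p functional-244607 ssh "sudo crictl images"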

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (5.858132353s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 image ls: (2.27217067s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-244607" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.13s)
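Note: the assertion at functional_test.go:442 boils down to "after image load, the tag must appear in image ls". A small re-check sketch, assuming the minikube binary path and profile name shown above; this is an illustration, not minikube's own helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs to list images in the node's runtime.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-244607", "image", "ls").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	// functional_test.go:442 fails when this tag is missing from the listing.
	tag := "gcr.io/google-containers/addon-resizer:functional-244607"
	if strings.Contains(string(out), tag) {
		fmt.Println("image is present")
	} else {
		fmt.Println("image is not there")
	}
}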

                                                
                                    
x
+
TestMutliControlPlane/serial/StopSecondaryNode (142.14s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 node stop m02 -v=7 --alsologtostderr
E0311 20:28:06.620782   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 20:28:20.730857   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:29:42.651062   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-834040 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.488336619s)

                                                
                                                
-- stdout --
	* Stopping node "ha-834040-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:27:58.323659   31261 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:27:58.323870   31261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:27:58.323878   31261 out.go:304] Setting ErrFile to fd 2...
	I0311 20:27:58.323885   31261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:27:58.324174   31261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:27:58.324518   31261 mustload.go:65] Loading cluster: ha-834040
	I0311 20:27:58.325010   31261 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:27:58.325036   31261 stop.go:39] StopHost: ha-834040-m02
	I0311 20:27:58.325620   31261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:27:58.325676   31261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:27:58.341122   31261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34141
	I0311 20:27:58.341638   31261 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:27:58.342231   31261 main.go:141] libmachine: Using API Version  1
	I0311 20:27:58.342259   31261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:27:58.342581   31261 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:27:58.344540   31261 out.go:177] * Stopping node "ha-834040-m02"  ...
	I0311 20:27:58.345906   31261 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0311 20:27:58.345937   31261 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:27:58.346147   31261 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0311 20:27:58.346168   31261 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:27:58.349155   31261 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:27:58.349389   31261 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:27:58.349420   31261 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:27:58.349508   31261 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:27:58.349658   31261 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:27:58.349863   31261 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:27:58.350033   31261 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	I0311 20:27:58.441775   31261 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0311 20:27:58.495549   31261 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0311 20:27:58.552675   31261 main.go:141] libmachine: Stopping "ha-834040-m02"...
	I0311 20:27:58.552700   31261 main.go:141] libmachine: (ha-834040-m02) Calling .GetState
	I0311 20:27:58.554243   31261 main.go:141] libmachine: (ha-834040-m02) Calling .Stop
	I0311 20:27:58.557568   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 0/120
	I0311 20:27:59.559294   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 1/120
	I0311 20:28:00.560623   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 2/120
	I0311 20:28:01.562173   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 3/120
	I0311 20:28:02.563850   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 4/120
	I0311 20:28:03.565719   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 5/120
	I0311 20:28:04.567215   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 6/120
	I0311 20:28:05.568525   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 7/120
	I0311 20:28:06.570427   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 8/120
	I0311 20:28:07.572317   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 9/120
	I0311 20:28:08.574467   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 10/120
	I0311 20:28:09.575752   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 11/120
	I0311 20:28:10.577704   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 12/120
	I0311 20:28:11.579447   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 13/120
	I0311 20:28:12.580852   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 14/120
	I0311 20:28:13.582730   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 15/120
	I0311 20:28:14.583902   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 16/120
	I0311 20:28:15.585234   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 17/120
	I0311 20:28:16.587156   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 18/120
	I0311 20:28:17.588367   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 19/120
	I0311 20:28:18.590427   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 20/120
	I0311 20:28:19.591982   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 21/120
	I0311 20:28:20.593446   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 22/120
	I0311 20:28:21.594973   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 23/120
	I0311 20:28:22.596643   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 24/120
	I0311 20:28:23.598553   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 25/120
	I0311 20:28:24.600049   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 26/120
	I0311 20:28:25.601561   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 27/120
	I0311 20:28:26.603670   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 28/120
	I0311 20:28:27.605954   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 29/120
	I0311 20:28:28.608193   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 30/120
	I0311 20:28:29.609479   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 31/120
	I0311 20:28:30.611565   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 32/120
	I0311 20:28:31.613002   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 33/120
	I0311 20:28:32.614401   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 34/120
	I0311 20:28:33.616192   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 35/120
	I0311 20:28:34.618337   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 36/120
	I0311 20:28:35.620659   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 37/120
	I0311 20:28:36.622006   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 38/120
	I0311 20:28:37.623290   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 39/120
	I0311 20:28:38.625360   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 40/120
	I0311 20:28:39.627470   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 41/120
	I0311 20:28:40.628820   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 42/120
	I0311 20:28:41.630085   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 43/120
	I0311 20:28:42.632411   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 44/120
	I0311 20:28:43.634401   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 45/120
	I0311 20:28:44.635797   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 46/120
	I0311 20:28:45.637357   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 47/120
	I0311 20:28:46.638659   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 48/120
	I0311 20:28:47.639861   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 49/120
	I0311 20:28:48.641806   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 50/120
	I0311 20:28:49.644025   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 51/120
	I0311 20:28:50.645711   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 52/120
	I0311 20:28:51.647380   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 53/120
	I0311 20:28:52.648718   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 54/120
	I0311 20:28:53.650523   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 55/120
	I0311 20:28:54.651931   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 56/120
	I0311 20:28:55.653192   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 57/120
	I0311 20:28:56.655131   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 58/120
	I0311 20:28:57.656360   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 59/120
	I0311 20:28:58.658518   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 60/120
	I0311 20:28:59.660516   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 61/120
	I0311 20:29:00.661777   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 62/120
	I0311 20:29:01.663044   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 63/120
	I0311 20:29:02.664573   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 64/120
	I0311 20:29:03.666416   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 65/120
	I0311 20:29:04.668348   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 66/120
	I0311 20:29:05.670001   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 67/120
	I0311 20:29:06.671812   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 68/120
	I0311 20:29:07.673280   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 69/120
	I0311 20:29:08.674902   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 70/120
	I0311 20:29:09.676099   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 71/120
	I0311 20:29:10.677421   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 72/120
	I0311 20:29:11.679170   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 73/120
	I0311 20:29:12.681205   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 74/120
	I0311 20:29:13.682911   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 75/120
	I0311 20:29:14.684232   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 76/120
	I0311 20:29:15.685499   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 77/120
	I0311 20:29:16.686897   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 78/120
	I0311 20:29:17.688114   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 79/120
	I0311 20:29:18.690107   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 80/120
	I0311 20:29:19.691544   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 81/120
	I0311 20:29:20.692848   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 82/120
	I0311 20:29:21.694054   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 83/120
	I0311 20:29:22.695393   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 84/120
	I0311 20:29:23.697196   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 85/120
	I0311 20:29:24.699223   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 86/120
	I0311 20:29:25.700587   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 87/120
	I0311 20:29:26.701795   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 88/120
	I0311 20:29:27.703053   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 89/120
	I0311 20:29:28.704975   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 90/120
	I0311 20:29:29.707317   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 91/120
	I0311 20:29:30.708601   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 92/120
	I0311 20:29:31.710652   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 93/120
	I0311 20:29:32.712110   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 94/120
	I0311 20:29:33.713422   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 95/120
	I0311 20:29:34.714606   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 96/120
	I0311 20:29:35.716089   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 97/120
	I0311 20:29:36.718064   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 98/120
	I0311 20:29:37.720079   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 99/120
	I0311 20:29:38.721452   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 100/120
	I0311 20:29:39.723295   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 101/120
	I0311 20:29:40.724495   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 102/120
	I0311 20:29:41.725821   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 103/120
	I0311 20:29:42.728179   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 104/120
	I0311 20:29:43.730318   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 105/120
	I0311 20:29:44.731942   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 106/120
	I0311 20:29:45.733986   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 107/120
	I0311 20:29:46.735331   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 108/120
	I0311 20:29:47.736949   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 109/120
	I0311 20:29:48.739075   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 110/120
	I0311 20:29:49.740476   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 111/120
	I0311 20:29:50.741885   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 112/120
	I0311 20:29:51.744376   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 113/120
	I0311 20:29:52.745718   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 114/120
	I0311 20:29:53.747574   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 115/120
	I0311 20:29:54.748865   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 116/120
	I0311 20:29:55.750299   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 117/120
	I0311 20:29:56.751652   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 118/120
	I0311 20:29:57.752958   31261 main.go:141] libmachine: (ha-834040-m02) Waiting for machine to stop 119/120
	I0311 20:29:58.754233   31261 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0311 20:29:58.754349   31261 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-834040 node stop m02 -v=7 --alsologtostderr": exit status 30
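Note: the stderr above shows what exit status 30 means here — the driver polls the VM once per second, logs "Waiting for machine to stop N/120", and after 120 attempts gives up while the VM still reports "Running". A self-contained sketch of that loop (getState is a stand-in for the kvm2 driver call; this is not libmachine's actual code):

package main

import (
	"fmt"
	"log"
	"time"
)

// getState stands in for the kvm2 driver's state query; in the failing run it
// keeps answering "Running" for the full two minutes.
func getState() string { return "Running" }

// stopWait mirrors the loop visible in the log: poll once per second, up to
// 120 attempts, then return the error that surfaces as exit status 30.
func stopWait() error {
	for i := 0; i < 120; i++ {
		if getState() == "Stopped" {
			return nil
		}
		log.Printf("Waiting for machine to stop %d/120", i)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", getState())
}

func main() {
	if err := stopWait(); err != nil {
		fmt.Println("stop err:", err)
	}
}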
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr: exit status 3 (19.197443651s)

                                                
                                                
-- stdout --
	ha-834040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-834040-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:29:58.811721   31580 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:29:58.811948   31580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:29:58.811956   31580 out.go:304] Setting ErrFile to fd 2...
	I0311 20:29:58.811961   31580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:29:58.812167   31580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:29:58.812368   31580 out.go:298] Setting JSON to false
	I0311 20:29:58.812396   31580 mustload.go:65] Loading cluster: ha-834040
	I0311 20:29:58.812447   31580 notify.go:220] Checking for updates...
	I0311 20:29:58.812929   31580 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:29:58.812950   31580 status.go:255] checking status of ha-834040 ...
	I0311 20:29:58.813347   31580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:29:58.813410   31580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:29:58.828218   31580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0311 20:29:58.828598   31580 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:29:58.829196   31580 main.go:141] libmachine: Using API Version  1
	I0311 20:29:58.829211   31580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:29:58.829536   31580 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:29:58.829721   31580 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:29:58.831297   31580 status.go:330] ha-834040 host status = "Running" (err=<nil>)
	I0311 20:29:58.831326   31580 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:29:58.831713   31580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:29:58.831776   31580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:29:58.845935   31580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43301
	I0311 20:29:58.846319   31580 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:29:58.846733   31580 main.go:141] libmachine: Using API Version  1
	I0311 20:29:58.846756   31580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:29:58.847105   31580 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:29:58.847287   31580 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:29:58.849838   31580 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:29:58.850178   31580 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:29:58.850203   31580 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:29:58.850340   31580 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:29:58.850711   31580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:29:58.850760   31580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:29:58.864767   31580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33759
	I0311 20:29:58.865045   31580 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:29:58.865481   31580 main.go:141] libmachine: Using API Version  1
	I0311 20:29:58.865500   31580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:29:58.865788   31580 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:29:58.865967   31580 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:29:58.866113   31580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:29:58.866140   31580 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:29:58.868698   31580 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:29:58.869112   31580 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:29:58.869138   31580 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:29:58.869252   31580 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:29:58.869402   31580 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:29:58.869526   31580 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:29:58.869659   31580 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:29:58.954965   31580 ssh_runner.go:195] Run: systemctl --version
	I0311 20:29:58.962528   31580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:29:58.981821   31580 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:29:58.981843   31580 api_server.go:166] Checking apiserver status ...
	I0311 20:29:58.981871   31580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:29:58.999043   31580 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup
	W0311 20:29:59.010130   31580 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:29:59.010168   31580 ssh_runner.go:195] Run: ls
	I0311 20:29:59.015112   31580 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:29:59.020066   31580 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:29:59.020086   31580 status.go:422] ha-834040 apiserver status = Running (err=<nil>)
	I0311 20:29:59.020099   31580 status.go:257] ha-834040 status: &{Name:ha-834040 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:29:59.020122   31580 status.go:255] checking status of ha-834040-m02 ...
	I0311 20:29:59.020411   31580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:29:59.020456   31580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:29:59.035257   31580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46165
	I0311 20:29:59.035654   31580 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:29:59.036158   31580 main.go:141] libmachine: Using API Version  1
	I0311 20:29:59.036179   31580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:29:59.036507   31580 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:29:59.036720   31580 main.go:141] libmachine: (ha-834040-m02) Calling .GetState
	I0311 20:29:59.038236   31580 status.go:330] ha-834040-m02 host status = "Running" (err=<nil>)
	I0311 20:29:59.038249   31580 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:29:59.038521   31580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:29:59.038553   31580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:29:59.053555   31580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0311 20:29:59.053989   31580 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:29:59.054428   31580 main.go:141] libmachine: Using API Version  1
	I0311 20:29:59.054455   31580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:29:59.054754   31580 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:29:59.054932   31580 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:29:59.057646   31580 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:29:59.058129   31580 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:29:59.058157   31580 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:29:59.058292   31580 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:29:59.058582   31580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:29:59.058616   31580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:29:59.072441   31580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34253
	I0311 20:29:59.072814   31580 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:29:59.073222   31580 main.go:141] libmachine: Using API Version  1
	I0311 20:29:59.073252   31580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:29:59.073623   31580 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:29:59.073783   31580 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:29:59.073955   31580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:29:59.073975   31580 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:29:59.076656   31580 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:29:59.077127   31580 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:29:59.077150   31580 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:29:59.077297   31580 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:29:59.077439   31580 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:29:59.077603   31580 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:29:59.077715   31580 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	W0311 20:30:17.580940   31580 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0311 20:30:17.581032   31580 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0311 20:30:17.581052   31580 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:17.581064   31580 status.go:257] ha-834040-m02 status: &{Name:ha-834040-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 20:30:17.581110   31580 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:17.581120   31580 status.go:255] checking status of ha-834040-m03 ...
	I0311 20:30:17.581563   31580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:17.581616   31580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:17.596954   31580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45267
	I0311 20:30:17.597324   31580 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:17.597844   31580 main.go:141] libmachine: Using API Version  1
	I0311 20:30:17.597874   31580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:17.598185   31580 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:17.598379   31580 main.go:141] libmachine: (ha-834040-m03) Calling .GetState
	I0311 20:30:17.599921   31580 status.go:330] ha-834040-m03 host status = "Running" (err=<nil>)
	I0311 20:30:17.599939   31580 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:30:17.600334   31580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:17.600377   31580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:17.614321   31580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I0311 20:30:17.614711   31580 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:17.615131   31580 main.go:141] libmachine: Using API Version  1
	I0311 20:30:17.615151   31580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:17.615484   31580 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:17.615669   31580 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:30:17.618262   31580 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:17.618768   31580 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:30:17.618797   31580 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:17.618923   31580 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:30:17.619178   31580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:17.619209   31580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:17.632855   31580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40023
	I0311 20:30:17.633205   31580 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:17.633586   31580 main.go:141] libmachine: Using API Version  1
	I0311 20:30:17.633606   31580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:17.633878   31580 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:17.634050   31580 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:30:17.634215   31580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:17.634236   31580 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:30:17.636904   31580 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:17.637394   31580 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:30:17.637486   31580 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:17.637657   31580 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:30:17.637818   31580 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:30:17.637938   31580 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:30:17.638123   31580 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:30:17.730068   31580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:17.750167   31580 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:30:17.750197   31580 api_server.go:166] Checking apiserver status ...
	I0311 20:30:17.750233   31580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:30:17.767923   31580 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0311 20:30:17.778443   31580 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:30:17.778484   31580 ssh_runner.go:195] Run: ls
	I0311 20:30:17.783342   31580 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:30:17.788145   31580 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:30:17.788167   31580 status.go:422] ha-834040-m03 apiserver status = Running (err=<nil>)
	I0311 20:30:17.788178   31580 status.go:257] ha-834040-m03 status: &{Name:ha-834040-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:30:17.788218   31580 status.go:255] checking status of ha-834040-m04 ...
	I0311 20:30:17.788513   31580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:17.788553   31580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:17.803703   31580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40981
	I0311 20:30:17.804115   31580 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:17.804554   31580 main.go:141] libmachine: Using API Version  1
	I0311 20:30:17.804574   31580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:17.804877   31580 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:17.805085   31580 main.go:141] libmachine: (ha-834040-m04) Calling .GetState
	I0311 20:30:17.806591   31580 status.go:330] ha-834040-m04 host status = "Running" (err=<nil>)
	I0311 20:30:17.806608   31580 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:30:17.806871   31580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:17.806902   31580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:17.820797   31580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41995
	I0311 20:30:17.821138   31580 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:17.821551   31580 main.go:141] libmachine: Using API Version  1
	I0311 20:30:17.821569   31580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:17.821873   31580 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:17.822015   31580 main.go:141] libmachine: (ha-834040-m04) Calling .GetIP
	I0311 20:30:17.824217   31580 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:17.824559   31580 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:30:17.824582   31580 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:17.824669   31580 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:30:17.825012   31580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:17.825052   31580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:17.839357   31580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0311 20:30:17.839753   31580 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:17.840214   31580 main.go:141] libmachine: Using API Version  1
	I0311 20:30:17.840239   31580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:17.840509   31580 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:17.840666   31580 main.go:141] libmachine: (ha-834040-m04) Calling .DriverName
	I0311 20:30:17.840851   31580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:17.840872   31580 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHHostname
	I0311 20:30:17.843154   31580 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:17.843563   31580 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:30:17.843582   31580 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:17.843702   31580 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHPort
	I0311 20:30:17.843872   31580 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHKeyPath
	I0311 20:30:17.844018   31580 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHUsername
	I0311 20:30:17.844148   31580 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m04/id_rsa Username:docker}
	I0311 20:30:17.934172   31580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:17.952518   31580 status.go:257] ha-834040-m04 status: &{Name:ha-834040-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr" : exit status 3
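Note: the status run above exits with status 3 because m02 never powered off — the SSH dial to 192.168.39.101:22 returns "no route to host", so the node is reported as Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent. A rough sketch of that classification, assuming reachability of TCP port 22 is the only signal (minikube's real status check also queries the driver, kubelet, and the apiserver healthz endpoint):

package main

import (
	"fmt"
	"net"
	"time"
)

// nodeStatus mirrors the fields printed in the status structs above.
type nodeStatus struct {
	Name, Host, Kubelet, APIServer string
}

// check marks a node as Error/Nonexistent when its SSH port is unreachable,
// which is what happens to ha-834040-m02 after the stuck stop attempt.
func check(name, ip string) nodeStatus {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 5*time.Second)
	if err != nil {
		return nodeStatus{Name: name, Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
	}
	conn.Close()
	return nodeStatus{Name: name, Host: "Running", Kubelet: "Running", APIServer: "Running"}
}

func main() {
	fmt.Printf("%+v\n", check("ha-834040-m02", "192.168.39.101"))
}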
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-834040 -n ha-834040
helpers_test.go:244: <<< TestMutliControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-834040 logs -n 25: (1.5314739s)
helpers_test.go:252: TestMutliControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile2017558617/001/cp-test_ha-834040-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040:/home/docker/cp-test_ha-834040-m03_ha-834040.txt                       |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040 sudo cat                                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m03_ha-834040.txt                                 |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m02:/home/docker/cp-test_ha-834040-m03_ha-834040-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m02 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m03_ha-834040-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04:/home/docker/cp-test_ha-834040-m03_ha-834040-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m04 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m03_ha-834040-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-834040 cp testdata/cp-test.txt                                                | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile2017558617/001/cp-test_ha-834040-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040:/home/docker/cp-test_ha-834040-m04_ha-834040.txt                       |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040 sudo cat                                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m04_ha-834040.txt                                 |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m02:/home/docker/cp-test_ha-834040-m04_ha-834040-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m02 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m04_ha-834040-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03:/home/docker/cp-test_ha-834040-m04_ha-834040-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m03 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m04_ha-834040-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-834040 node stop m02 -v=7                                                     | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 20:22:45
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 20:22:45.357118   27491 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:22:45.357232   27491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:22:45.357242   27491 out.go:304] Setting ErrFile to fd 2...
	I0311 20:22:45.357254   27491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:22:45.357457   27491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:22:45.357980   27491 out.go:298] Setting JSON to false
	I0311 20:22:45.358846   27491 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3914,"bootTime":1710184651,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 20:22:45.358900   27491 start.go:139] virtualization: kvm guest
	I0311 20:22:45.361360   27491 out.go:177] * [ha-834040] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 20:22:45.362829   27491 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 20:22:45.362813   27491 notify.go:220] Checking for updates...
	I0311 20:22:45.364611   27491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 20:22:45.365924   27491 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:22:45.367155   27491 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:22:45.368447   27491 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 20:22:45.369687   27491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 20:22:45.371128   27491 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 20:22:45.404336   27491 out.go:177] * Using the kvm2 driver based on user configuration
	I0311 20:22:45.405688   27491 start.go:297] selected driver: kvm2
	I0311 20:22:45.405707   27491 start.go:901] validating driver "kvm2" against <nil>
	I0311 20:22:45.405720   27491 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 20:22:45.406651   27491 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:22:45.406715   27491 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 20:22:45.420585   27491 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 20:22:45.420628   27491 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 20:22:45.420860   27491 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 20:22:45.420886   27491 cni.go:84] Creating CNI manager for ""
	I0311 20:22:45.420891   27491 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0311 20:22:45.420895   27491 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 20:22:45.420942   27491 start.go:340] cluster config:
	{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:22:45.421030   27491 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:22:45.422794   27491 out.go:177] * Starting "ha-834040" primary control-plane node in "ha-834040" cluster
	I0311 20:22:45.424002   27491 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:22:45.424025   27491 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0311 20:22:45.424036   27491 cache.go:56] Caching tarball of preloaded images
	I0311 20:22:45.424108   27491 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 20:22:45.424119   27491 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 20:22:45.424428   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:22:45.424452   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json: {Name:mk847490f58f22447c66fcb3c2cb95216eb6be6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:22:45.424565   27491 start.go:360] acquireMachinesLock for ha-834040: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 20:22:45.424591   27491 start.go:364] duration metric: took 14.057µs to acquireMachinesLock for "ha-834040"
	I0311 20:22:45.424606   27491 start.go:93] Provisioning new machine with config: &{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:22:45.424660   27491 start.go:125] createHost starting for "" (driver="kvm2")
	I0311 20:22:45.426188   27491 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 20:22:45.426292   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:22:45.426326   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:22:45.439379   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I0311 20:22:45.439717   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:22:45.440227   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:22:45.440245   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:22:45.440541   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:22:45.440715   27491 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:22:45.440871   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:22:45.440997   27491 start.go:159] libmachine.API.Create for "ha-834040" (driver="kvm2")
	I0311 20:22:45.441016   27491 client.go:168] LocalClient.Create starting
	I0311 20:22:45.441039   27491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem
	I0311 20:22:45.441070   27491 main.go:141] libmachine: Decoding PEM data...
	I0311 20:22:45.441088   27491 main.go:141] libmachine: Parsing certificate...
	I0311 20:22:45.441134   27491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem
	I0311 20:22:45.441151   27491 main.go:141] libmachine: Decoding PEM data...
	I0311 20:22:45.441170   27491 main.go:141] libmachine: Parsing certificate...
	I0311 20:22:45.441189   27491 main.go:141] libmachine: Running pre-create checks...
	I0311 20:22:45.441198   27491 main.go:141] libmachine: (ha-834040) Calling .PreCreateCheck
	I0311 20:22:45.441496   27491 main.go:141] libmachine: (ha-834040) Calling .GetConfigRaw
	I0311 20:22:45.441803   27491 main.go:141] libmachine: Creating machine...
	I0311 20:22:45.441814   27491 main.go:141] libmachine: (ha-834040) Calling .Create
	I0311 20:22:45.441906   27491 main.go:141] libmachine: (ha-834040) Creating KVM machine...
	I0311 20:22:45.443025   27491 main.go:141] libmachine: (ha-834040) DBG | found existing default KVM network
	I0311 20:22:45.443636   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:45.443515   27514 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0311 20:22:45.443652   27491 main.go:141] libmachine: (ha-834040) DBG | created network xml: 
	I0311 20:22:45.443660   27491 main.go:141] libmachine: (ha-834040) DBG | <network>
	I0311 20:22:45.443667   27491 main.go:141] libmachine: (ha-834040) DBG |   <name>mk-ha-834040</name>
	I0311 20:22:45.443678   27491 main.go:141] libmachine: (ha-834040) DBG |   <dns enable='no'/>
	I0311 20:22:45.443689   27491 main.go:141] libmachine: (ha-834040) DBG |   
	I0311 20:22:45.443696   27491 main.go:141] libmachine: (ha-834040) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0311 20:22:45.443704   27491 main.go:141] libmachine: (ha-834040) DBG |     <dhcp>
	I0311 20:22:45.443714   27491 main.go:141] libmachine: (ha-834040) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0311 20:22:45.443729   27491 main.go:141] libmachine: (ha-834040) DBG |     </dhcp>
	I0311 20:22:45.443743   27491 main.go:141] libmachine: (ha-834040) DBG |   </ip>
	I0311 20:22:45.443752   27491 main.go:141] libmachine: (ha-834040) DBG |   
	I0311 20:22:45.443771   27491 main.go:141] libmachine: (ha-834040) DBG | </network>
	I0311 20:22:45.443786   27491 main.go:141] libmachine: (ha-834040) DBG | 
	I0311 20:22:45.448381   27491 main.go:141] libmachine: (ha-834040) DBG | trying to create private KVM network mk-ha-834040 192.168.39.0/24...
	I0311 20:22:45.509320   27491 main.go:141] libmachine: (ha-834040) DBG | private KVM network mk-ha-834040 192.168.39.0/24 created
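
This is the point where the kvm2 driver materializes the network XML printed above into a running libvirt network. For orientation, a minimal Go sketch of the same effect done by shelling out to virsh; the driver itself goes through the libvirt API, and the temp-file handling and names below are illustrative assumptions, not minikube code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// The XML mirrors the mk-ha-834040 definition logged above.
		xml := `<network>
	  <name>mk-ha-834040</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`

		f, err := os.CreateTemp("", "mk-ha-834040-*.xml")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(xml); err != nil {
			panic(err)
		}
		f.Close()

		// virsh net-define registers the network, net-start brings it up.
		for _, args := range [][]string{
			{"net-define", f.Name()},
			{"net-start", "mk-ha-834040"},
		} {
			cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
			out, err := cmd.CombinedOutput()
			fmt.Printf("virsh %v: %s\n", args, out)
			if err != nil {
				panic(err)
			}
		}
	}
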
	I0311 20:22:45.509382   27491 main.go:141] libmachine: (ha-834040) Setting up store path in /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040 ...
	I0311 20:22:45.509410   27491 main.go:141] libmachine: (ha-834040) Building disk image from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0311 20:22:45.509430   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:45.509373   27514 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:22:45.509576   27491 main.go:141] libmachine: (ha-834040) Downloading /home/jenkins/minikube-integration/18358-11004/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0311 20:22:45.732384   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:45.732249   27514 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa...
	I0311 20:22:45.834319   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:45.834220   27514 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/ha-834040.rawdisk...
	I0311 20:22:45.834351   27491 main.go:141] libmachine: (ha-834040) DBG | Writing magic tar header
	I0311 20:22:45.834361   27491 main.go:141] libmachine: (ha-834040) DBG | Writing SSH key tar header
	I0311 20:22:45.834375   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:45.834346   27514 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040 ...
	I0311 20:22:45.834463   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040
	I0311 20:22:45.834496   27491 main.go:141] libmachine: (ha-834040) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040 (perms=drwx------)
	I0311 20:22:45.834508   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines
	I0311 20:22:45.834528   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:22:45.834535   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004
	I0311 20:22:45.834543   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0311 20:22:45.834550   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home/jenkins
	I0311 20:22:45.834562   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home
	I0311 20:22:45.834571   27491 main.go:141] libmachine: (ha-834040) DBG | Skipping /home - not owner
	I0311 20:22:45.834586   27491 main.go:141] libmachine: (ha-834040) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines (perms=drwxr-xr-x)
	I0311 20:22:45.834605   27491 main.go:141] libmachine: (ha-834040) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube (perms=drwxr-xr-x)
	I0311 20:22:45.834614   27491 main.go:141] libmachine: (ha-834040) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004 (perms=drwxrwxr-x)
	I0311 20:22:45.834623   27491 main.go:141] libmachine: (ha-834040) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0311 20:22:45.834633   27491 main.go:141] libmachine: (ha-834040) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0311 20:22:45.834654   27491 main.go:141] libmachine: (ha-834040) Creating domain...
	I0311 20:22:45.835654   27491 main.go:141] libmachine: (ha-834040) define libvirt domain using xml: 
	I0311 20:22:45.835677   27491 main.go:141] libmachine: (ha-834040) <domain type='kvm'>
	I0311 20:22:45.835687   27491 main.go:141] libmachine: (ha-834040)   <name>ha-834040</name>
	I0311 20:22:45.835696   27491 main.go:141] libmachine: (ha-834040)   <memory unit='MiB'>2200</memory>
	I0311 20:22:45.835703   27491 main.go:141] libmachine: (ha-834040)   <vcpu>2</vcpu>
	I0311 20:22:45.835718   27491 main.go:141] libmachine: (ha-834040)   <features>
	I0311 20:22:45.835724   27491 main.go:141] libmachine: (ha-834040)     <acpi/>
	I0311 20:22:45.835728   27491 main.go:141] libmachine: (ha-834040)     <apic/>
	I0311 20:22:45.835733   27491 main.go:141] libmachine: (ha-834040)     <pae/>
	I0311 20:22:45.835741   27491 main.go:141] libmachine: (ha-834040)     
	I0311 20:22:45.835749   27491 main.go:141] libmachine: (ha-834040)   </features>
	I0311 20:22:45.835755   27491 main.go:141] libmachine: (ha-834040)   <cpu mode='host-passthrough'>
	I0311 20:22:45.835760   27491 main.go:141] libmachine: (ha-834040)   
	I0311 20:22:45.835764   27491 main.go:141] libmachine: (ha-834040)   </cpu>
	I0311 20:22:45.835816   27491 main.go:141] libmachine: (ha-834040)   <os>
	I0311 20:22:45.835841   27491 main.go:141] libmachine: (ha-834040)     <type>hvm</type>
	I0311 20:22:45.835848   27491 main.go:141] libmachine: (ha-834040)     <boot dev='cdrom'/>
	I0311 20:22:45.835852   27491 main.go:141] libmachine: (ha-834040)     <boot dev='hd'/>
	I0311 20:22:45.835857   27491 main.go:141] libmachine: (ha-834040)     <bootmenu enable='no'/>
	I0311 20:22:45.835861   27491 main.go:141] libmachine: (ha-834040)   </os>
	I0311 20:22:45.835866   27491 main.go:141] libmachine: (ha-834040)   <devices>
	I0311 20:22:45.835873   27491 main.go:141] libmachine: (ha-834040)     <disk type='file' device='cdrom'>
	I0311 20:22:45.835881   27491 main.go:141] libmachine: (ha-834040)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/boot2docker.iso'/>
	I0311 20:22:45.836290   27491 main.go:141] libmachine: (ha-834040)       <target dev='hdc' bus='scsi'/>
	I0311 20:22:45.836305   27491 main.go:141] libmachine: (ha-834040)       <readonly/>
	I0311 20:22:45.836318   27491 main.go:141] libmachine: (ha-834040)     </disk>
	I0311 20:22:45.836332   27491 main.go:141] libmachine: (ha-834040)     <disk type='file' device='disk'>
	I0311 20:22:45.836340   27491 main.go:141] libmachine: (ha-834040)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0311 20:22:45.836358   27491 main.go:141] libmachine: (ha-834040)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/ha-834040.rawdisk'/>
	I0311 20:22:45.836365   27491 main.go:141] libmachine: (ha-834040)       <target dev='hda' bus='virtio'/>
	I0311 20:22:45.836379   27491 main.go:141] libmachine: (ha-834040)     </disk>
	I0311 20:22:45.836386   27491 main.go:141] libmachine: (ha-834040)     <interface type='network'>
	I0311 20:22:45.836395   27491 main.go:141] libmachine: (ha-834040)       <source network='mk-ha-834040'/>
	I0311 20:22:45.836407   27491 main.go:141] libmachine: (ha-834040)       <model type='virtio'/>
	I0311 20:22:45.836415   27491 main.go:141] libmachine: (ha-834040)     </interface>
	I0311 20:22:45.836422   27491 main.go:141] libmachine: (ha-834040)     <interface type='network'>
	I0311 20:22:45.836436   27491 main.go:141] libmachine: (ha-834040)       <source network='default'/>
	I0311 20:22:45.836442   27491 main.go:141] libmachine: (ha-834040)       <model type='virtio'/>
	I0311 20:22:45.836455   27491 main.go:141] libmachine: (ha-834040)     </interface>
	I0311 20:22:45.836462   27491 main.go:141] libmachine: (ha-834040)     <serial type='pty'>
	I0311 20:22:45.836472   27491 main.go:141] libmachine: (ha-834040)       <target port='0'/>
	I0311 20:22:45.836478   27491 main.go:141] libmachine: (ha-834040)     </serial>
	I0311 20:22:45.836491   27491 main.go:141] libmachine: (ha-834040)     <console type='pty'>
	I0311 20:22:45.836498   27491 main.go:141] libmachine: (ha-834040)       <target type='serial' port='0'/>
	I0311 20:22:45.836513   27491 main.go:141] libmachine: (ha-834040)     </console>
	I0311 20:22:45.836520   27491 main.go:141] libmachine: (ha-834040)     <rng model='virtio'>
	I0311 20:22:45.836530   27491 main.go:141] libmachine: (ha-834040)       <backend model='random'>/dev/random</backend>
	I0311 20:22:45.836541   27491 main.go:141] libmachine: (ha-834040)     </rng>
	I0311 20:22:45.836549   27491 main.go:141] libmachine: (ha-834040)     
	I0311 20:22:45.836555   27491 main.go:141] libmachine: (ha-834040)     
	I0311 20:22:45.836576   27491 main.go:141] libmachine: (ha-834040)   </devices>
	I0311 20:22:45.836582   27491 main.go:141] libmachine: (ha-834040) </domain>
	I0311 20:22:45.836595   27491 main.go:141] libmachine: (ha-834040) 
	I0311 20:22:45.841126   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:c2:4b:c0 in network default
	I0311 20:22:45.841751   27491 main.go:141] libmachine: (ha-834040) Ensuring networks are active...
	I0311 20:22:45.841775   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:45.842479   27491 main.go:141] libmachine: (ha-834040) Ensuring network default is active
	I0311 20:22:45.842715   27491 main.go:141] libmachine: (ha-834040) Ensuring network mk-ha-834040 is active
	I0311 20:22:45.843152   27491 main.go:141] libmachine: (ha-834040) Getting domain xml...
	I0311 20:22:45.843813   27491 main.go:141] libmachine: (ha-834040) Creating domain...
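
With the domain XML in place, "Creating domain" amounts to registering the definition with libvirt and booting it. The sketch below shows the virsh equivalent for illustration only; the driver uses the libvirt API directly, and domain.xml is a placeholder path:

	package main

	import (
		"os"
		"os/exec"
	)

	// run invokes virsh against the system libvirt daemon and streams its output.
	func run(args ...string) {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}

	func main() {
		run("define", "domain.xml") // register the persistent domain definition
		run("start", "ha-834040")   // boot it; the DHCP lease awaited below follows
	}
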
	I0311 20:22:46.997557   27491 main.go:141] libmachine: (ha-834040) Waiting to get IP...
	I0311 20:22:46.998218   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:46.998632   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:46.998664   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:46.998626   27514 retry.go:31] will retry after 263.902152ms: waiting for machine to come up
	I0311 20:22:47.264098   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:47.264506   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:47.264539   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:47.264486   27514 retry.go:31] will retry after 266.30343ms: waiting for machine to come up
	I0311 20:22:47.531787   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:47.532158   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:47.532188   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:47.532111   27514 retry.go:31] will retry after 476.414298ms: waiting for machine to come up
	I0311 20:22:48.009646   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:48.010063   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:48.010096   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:48.010029   27514 retry.go:31] will retry after 600.032755ms: waiting for machine to come up
	I0311 20:22:48.611700   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:48.612092   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:48.612124   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:48.612052   27514 retry.go:31] will retry after 604.393037ms: waiting for machine to come up
	I0311 20:22:49.217955   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:49.218384   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:49.218407   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:49.218361   27514 retry.go:31] will retry after 886.712129ms: waiting for machine to come up
	I0311 20:22:50.106801   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:50.107120   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:50.107156   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:50.107081   27514 retry.go:31] will retry after 801.265373ms: waiting for machine to come up
	I0311 20:22:50.909467   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:50.909830   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:50.909857   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:50.909772   27514 retry.go:31] will retry after 1.484377047s: waiting for machine to come up
	I0311 20:22:52.396232   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:52.396652   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:52.396680   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:52.396616   27514 retry.go:31] will retry after 1.119763452s: waiting for machine to come up
	I0311 20:22:53.519124   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:53.519538   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:53.519560   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:53.519494   27514 retry.go:31] will retry after 1.725300378s: waiting for machine to come up
	I0311 20:22:55.247275   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:55.247727   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:55.247765   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:55.247697   27514 retry.go:31] will retry after 2.320384618s: waiting for machine to come up
	I0311 20:22:57.569649   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:57.570053   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:57.570076   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:57.570018   27514 retry.go:31] will retry after 2.529001577s: waiting for machine to come up
	I0311 20:23:00.101623   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:00.101988   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:23:00.102008   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:23:00.101952   27514 retry.go:31] will retry after 3.066008911s: waiting for machine to come up
	I0311 20:23:03.169009   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:03.169423   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:23:03.169447   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:23:03.169393   27514 retry.go:31] will retry after 3.89452115s: waiting for machine to come up
	I0311 20:23:07.065892   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.066320   27491 main.go:141] libmachine: (ha-834040) Found IP for machine: 192.168.39.128
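
The "will retry after ..." lines above are a plain retry-with-growing-delay poll of the network's DHCP leases until the new MAC address shows up. A compact sketch of that pattern, with lookupLeaseIP as a hypothetical stand-in for the driver's lease query:

	package provision

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupLeaseIP is a hypothetical stand-in for the driver's DHCP lease query
	// (e.g. parsing `virsh net-dhcp-leases mk-ha-834040` for the machine's MAC).
	func lookupLeaseIP(mac string) (string, error) {
		return "", errors.New("no lease yet")
	}

	// waitForIP polls for the lease with a growing delay until an IP appears
	// or the timeout is hit, much like the retries logged above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			if delay < 4*time.Second {
				delay *= 2 // roughly the growth pattern seen in the log
			}
		}
		return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
	}
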
	I0311 20:23:07.066349   27491 main.go:141] libmachine: (ha-834040) Reserving static IP address...
	I0311 20:23:07.066365   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has current primary IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.066654   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find host DHCP lease matching {name: "ha-834040", mac: "52:54:00:33:6f:e8", ip: "192.168.39.128"} in network mk-ha-834040
	I0311 20:23:07.133337   27491 main.go:141] libmachine: (ha-834040) DBG | Getting to WaitForSSH function...
	I0311 20:23:07.133368   27491 main.go:141] libmachine: (ha-834040) Reserved static IP address: 192.168.39.128
	I0311 20:23:07.133415   27491 main.go:141] libmachine: (ha-834040) Waiting for SSH to be available...
	I0311 20:23:07.135659   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.135977   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.136006   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.136081   27491 main.go:141] libmachine: (ha-834040) DBG | Using SSH client type: external
	I0311 20:23:07.136103   27491 main.go:141] libmachine: (ha-834040) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa (-rw-------)
	I0311 20:23:07.136153   27491 main.go:141] libmachine: (ha-834040) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 20:23:07.136165   27491 main.go:141] libmachine: (ha-834040) DBG | About to run SSH command:
	I0311 20:23:07.136194   27491 main.go:141] libmachine: (ha-834040) DBG | exit 0
	I0311 20:23:07.260623   27491 main.go:141] libmachine: (ha-834040) DBG | SSH cmd err, output: <nil>: 
	I0311 20:23:07.260945   27491 main.go:141] libmachine: (ha-834040) KVM machine creation complete!
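
The WaitForSSH step above simply runs `exit 0` over SSH with non-interactive options until the guest answers. A sketch of the same probe using the ssh binary, mirroring the options shown in the log; the helper name and retry interval are illustrative:

	package provision

	import (
		"os/exec"
		"time"
	)

	// waitForSSH retries a no-op command over ssh until the guest accepts the
	// connection or the timeout expires.
	func waitForSSH(ip, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
			"-i", keyPath, "-p", "22", "docker@" + ip, "exit 0",
		}
		var err error
		for time.Now().Before(deadline) {
			if err = exec.Command("ssh", args...).Run(); err == nil {
				return nil // SSH answered; provisioning can proceed
			}
			time.Sleep(2 * time.Second)
		}
		return err
	}
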
	I0311 20:23:07.261231   27491 main.go:141] libmachine: (ha-834040) Calling .GetConfigRaw
	I0311 20:23:07.261766   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:07.261936   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:07.262075   27491 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0311 20:23:07.262086   27491 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:23:07.263165   27491 main.go:141] libmachine: Detecting operating system of created instance...
	I0311 20:23:07.263178   27491 main.go:141] libmachine: Waiting for SSH to be available...
	I0311 20:23:07.263186   27491 main.go:141] libmachine: Getting to WaitForSSH function...
	I0311 20:23:07.263194   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.265722   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.266057   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.266083   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.266222   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:07.266405   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.266531   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.266638   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:07.266862   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:23:07.267063   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:23:07.267075   27491 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0311 20:23:07.368164   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:23:07.368188   27491 main.go:141] libmachine: Detecting the provisioner...
	I0311 20:23:07.368197   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.370723   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.371067   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.371102   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.371281   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:07.371481   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.371645   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.371800   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:07.371980   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:23:07.372154   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:23:07.372168   27491 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0311 20:23:07.478232   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0311 20:23:07.478289   27491 main.go:141] libmachine: found compatible host: buildroot
	I0311 20:23:07.478299   27491 main.go:141] libmachine: Provisioning with buildroot...
	I0311 20:23:07.478314   27491 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:23:07.478542   27491 buildroot.go:166] provisioning hostname "ha-834040"
	I0311 20:23:07.478567   27491 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:23:07.478744   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.481281   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.481603   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.481631   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.481811   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:07.481970   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.482121   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.482251   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:07.482435   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:23:07.482624   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:23:07.482637   27491 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-834040 && echo "ha-834040" | sudo tee /etc/hostname
	I0311 20:23:07.600305   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-834040
	
	I0311 20:23:07.600328   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.603722   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.604058   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.604081   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.604260   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:07.604461   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.604611   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.604726   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:07.604876   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:23:07.605027   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:23:07.605049   27491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-834040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-834040/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-834040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 20:23:07.715195   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:23:07.715219   27491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 20:23:07.715240   27491 buildroot.go:174] setting up certificates
	I0311 20:23:07.715253   27491 provision.go:84] configureAuth start
	I0311 20:23:07.715277   27491 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:23:07.715561   27491 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:23:07.718036   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.718363   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.718390   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.718555   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.720656   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.721040   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.721071   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.721184   27491 provision.go:143] copyHostCerts
	I0311 20:23:07.721222   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:23:07.721280   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 20:23:07.721292   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:23:07.721364   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 20:23:07.721476   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:23:07.721501   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 20:23:07.721508   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:23:07.721551   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 20:23:07.721613   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:23:07.721640   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 20:23:07.721649   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:23:07.721683   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 20:23:07.721756   27491 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.ha-834040 san=[127.0.0.1 192.168.39.128 ha-834040 localhost minikube]
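
The step above issues a server certificate signed by the local CA with the SANs listed in the log. A minimal crypto/x509 sketch of that operation; paths, lifetime and error handling are simplified and not minikube's exact values:

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert generates a key pair and a server certificate carrying the
	// SANs from the log, signed by the given CA. The returned cert is DER-encoded;
	// PEM-encode it before writing server.pem.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-834040"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-834040", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.128")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}
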
	I0311 20:23:07.773153   27491 provision.go:177] copyRemoteCerts
	I0311 20:23:07.773206   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 20:23:07.773225   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.775507   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.775849   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.775897   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.776025   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:07.776204   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.776368   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:07.776500   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:23:07.862194   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0311 20:23:07.862272   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0311 20:23:07.890626   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0311 20:23:07.890683   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 20:23:07.918911   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0311 20:23:07.918960   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 20:23:07.945269   27491 provision.go:87] duration metric: took 229.999498ms to configureAuth
	I0311 20:23:07.945291   27491 buildroot.go:189] setting minikube options for container-runtime
	I0311 20:23:07.945489   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:23:07.945567   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.947915   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.948195   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.948220   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.948405   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:07.948589   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.948757   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.948916   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:07.949081   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:23:07.949268   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:23:07.949284   27491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 20:23:08.215386   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
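
The `%!s(MISSING)` in the logged command is Go's fmt notation for a format verb that had no matching argument when the command string was rendered for the log; the command that actually ran used `printf %s`, as the file content echoed back by tee just above confirms. A sketch of how such a command string can be composed; the function and variable names are illustrative, not minikube's helpers:

	package provision

	import "fmt"

	// crioDropIn composes the remote command that writes the CRI-O drop-in shown
	// above and restarts the service.
	func crioDropIn(opts string) string {
		return fmt.Sprintf(
			`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`,
			opts)
	}
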
	
	I0311 20:23:08.215412   27491 main.go:141] libmachine: Checking connection to Docker...
	I0311 20:23:08.215428   27491 main.go:141] libmachine: (ha-834040) Calling .GetURL
	I0311 20:23:08.216647   27491 main.go:141] libmachine: (ha-834040) DBG | Using libvirt version 6000000
	I0311 20:23:08.218575   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.218828   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.218861   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.219034   27491 main.go:141] libmachine: Docker is up and running!
	I0311 20:23:08.219053   27491 main.go:141] libmachine: Reticulating splines...
	I0311 20:23:08.219061   27491 client.go:171] duration metric: took 22.778035881s to LocalClient.Create
	I0311 20:23:08.219090   27491 start.go:167] duration metric: took 22.778089023s to libmachine.API.Create "ha-834040"
	I0311 20:23:08.219100   27491 start.go:293] postStartSetup for "ha-834040" (driver="kvm2")
	I0311 20:23:08.219112   27491 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 20:23:08.219132   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:08.219341   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 20:23:08.219366   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:08.221263   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.221541   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.221572   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.221672   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:08.221840   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:08.221973   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:08.222090   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:23:08.305829   27491 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 20:23:08.310759   27491 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 20:23:08.310776   27491 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 20:23:08.310837   27491 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 20:23:08.310926   27491 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 20:23:08.310942   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /etc/ssl/certs/182352.pem
	I0311 20:23:08.311051   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 20:23:08.323084   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:23:08.348503   27491 start.go:296] duration metric: took 129.392519ms for postStartSetup
	I0311 20:23:08.348536   27491 main.go:141] libmachine: (ha-834040) Calling .GetConfigRaw
	I0311 20:23:08.349153   27491 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:23:08.351581   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.351957   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.351986   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.352150   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:23:08.352309   27491 start.go:128] duration metric: took 22.927641429s to createHost
	I0311 20:23:08.352328   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:08.354293   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.354584   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.354613   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.354728   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:08.354899   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:08.355061   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:08.355221   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:08.355352   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:23:08.355518   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:23:08.355536   27491 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 20:23:08.457684   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710188588.426853069
	
	I0311 20:23:08.457711   27491 fix.go:216] guest clock: 1710188588.426853069
	I0311 20:23:08.457721   27491 fix.go:229] Guest: 2024-03-11 20:23:08.426853069 +0000 UTC Remote: 2024-03-11 20:23:08.352319386 +0000 UTC m=+23.041906755 (delta=74.533683ms)
	I0311 20:23:08.457770   27491 fix.go:200] guest clock delta is within tolerance: 74.533683ms
	I0311 20:23:08.457777   27491 start.go:83] releasing machines lock for "ha-834040", held for 23.033177693s
	I0311 20:23:08.457798   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:08.458057   27491 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:23:08.460298   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.460603   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.460634   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.460782   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:08.461257   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:08.461420   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:08.461498   27491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 20:23:08.461535   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:08.461635   27491 ssh_runner.go:195] Run: cat /version.json
	I0311 20:23:08.461659   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:08.463986   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.464171   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.464264   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.464286   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.464440   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:08.464474   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.464499   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.464617   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:08.464633   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:08.464834   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:08.464838   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:08.464996   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:08.465000   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:23:08.465128   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:23:08.551817   27491 ssh_runner.go:195] Run: systemctl --version
	I0311 20:23:08.575684   27491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 20:23:08.737257   27491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 20:23:08.744645   27491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 20:23:08.744701   27491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 20:23:08.762282   27491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
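The rename above is how minikube deactivates conflicting bridge/podman CNI configs while leaving them recoverable; a quick check of what the runtime will still load, run over the same SSH session:

    # configs ending in .mk_disabled are ignored; whatever remains is what cri-o will actually use
    ls /etc/cni/net.d/
    ls /etc/cni/net.d/ | grep -v '\.mk_disabled$'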
	I0311 20:23:08.762305   27491 start.go:494] detecting cgroup driver to use...
	I0311 20:23:08.762368   27491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 20:23:08.778367   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 20:23:08.792310   27491 docker.go:217] disabling cri-docker service (if available) ...
	I0311 20:23:08.792354   27491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 20:23:08.806314   27491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 20:23:08.821443   27491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 20:23:08.941704   27491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 20:23:09.081216   27491 docker.go:233] disabling docker service ...
	I0311 20:23:09.081287   27491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 20:23:09.097332   27491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 20:23:09.111565   27491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 20:23:09.250642   27491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 20:23:09.390462   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 20:23:09.405358   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 20:23:09.425278   27491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 20:23:09.425342   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:23:09.435796   27491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 20:23:09.435846   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:23:09.446390   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:23:09.456826   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:23:09.467528   27491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 20:23:09.479724   27491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 20:23:09.490634   27491 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 20:23:09.490676   27491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 20:23:09.503618   27491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 20:23:09.513243   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:23:09.652368   27491 ssh_runner.go:195] Run: sudo systemctl restart crio
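A quick way to confirm that the cri-o edits above landed once the restart completes, using the same file and keys the sed commands in this log touch:

    # pause image, cgroup driver and conmon cgroup written by the sed commands above
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # br_netfilter was modprobe'd above, so this sysctl should now resolve
    sysctl net.bridge.bridge-nf-call-iptables
    cat /proc/sys/net/ipv4/ip_forward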
	I0311 20:23:09.792800   27491 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 20:23:09.792862   27491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 20:23:09.798501   27491 start.go:562] Will wait 60s for crictl version
	I0311 20:23:09.798548   27491 ssh_runner.go:195] Run: which crictl
	I0311 20:23:09.802566   27491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 20:23:09.841419   27491 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 20:23:09.841489   27491 ssh_runner.go:195] Run: crio --version
	I0311 20:23:09.870470   27491 ssh_runner.go:195] Run: crio --version
	I0311 20:23:09.901524   27491 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 20:23:09.902831   27491 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:23:09.905562   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:09.905872   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:09.905897   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:09.906097   27491 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 20:23:09.910532   27491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
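The /etc/hosts rewrite above follows a grep-out-then-append pattern so repeated starts stay idempotent; a generic sketch of the same idea, with NAME and ADDR as placeholder values:

    NAME=host.minikube.internal
    ADDR=192.168.39.1
    # drop any existing line ending in "<tab>NAME", append the current mapping, then swap the file in
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$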
	I0311 20:23:09.923983   27491 kubeadm.go:877] updating cluster {Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 20:23:09.924069   27491 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:23:09.924102   27491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:23:09.971391   27491 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 20:23:09.971453   27491 ssh_runner.go:195] Run: which lz4
	I0311 20:23:09.975521   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0311 20:23:09.975594   27491 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 20:23:09.979798   27491 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 20:23:09.979814   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 20:23:11.857810   27491 crio.go:444] duration metric: took 1.882233993s to copy over tarball
	I0311 20:23:11.857873   27491 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 20:23:14.429503   27491 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.571601285s)
	I0311 20:23:14.429530   27491 crio.go:451] duration metric: took 2.571697352s to extract the tarball
	I0311 20:23:14.429537   27491 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 20:23:14.473160   27491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:23:14.527587   27491 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 20:23:14.527607   27491 cache_images.go:84] Images are preloaded, skipping loading
	I0311 20:23:14.527613   27491 kubeadm.go:928] updating node { 192.168.39.128 8443 v1.28.4 crio true true} ...
	I0311 20:23:14.527690   27491 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-834040 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 20:23:14.527746   27491 ssh_runner.go:195] Run: crio config
	I0311 20:23:14.580410   27491 cni.go:84] Creating CNI manager for ""
	I0311 20:23:14.580431   27491 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0311 20:23:14.580444   27491 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 20:23:14.580462   27491 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-834040 NodeName:ha-834040 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 20:23:14.580578   27491 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-834040"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
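A generated config like the one above can be sanity-checked before it is applied; a hedged sketch using the same kubeadm binary and yaml path that appear later in this log (dry-run renders everything without bootstrapping the node):

    # render and validate the config without creating a cluster
    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
    # compare against kubeadm's built-in defaults for this version
    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config print init-defaults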
	
	I0311 20:23:14.580598   27491 kube-vip.go:101] generating kube-vip config ...
	I0311 20:23:14.580664   27491 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
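Once the control plane is running, the static pod above should claim the HA VIP; a hedged spot-check using the address and interface from the manifest:

    # the VIP appears as a secondary address on eth0 of whichever control-plane node holds the lease
    ip -4 addr show dev eth0 | grep 192.168.39.254
    # the API server should answer on the VIP; /healthz is readable without credentials on a default kubeadm setup
    curl -ks https://192.168.39.254:8443/healthz; echo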
	I0311 20:23:14.580707   27491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 20:23:14.592943   27491 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 20:23:14.592995   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0311 20:23:14.603866   27491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0311 20:23:14.622284   27491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 20:23:14.640597   27491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0311 20:23:14.658813   27491 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0311 20:23:14.676840   27491 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0311 20:23:14.681059   27491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 20:23:14.695113   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:23:14.832190   27491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:23:14.851793   27491 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040 for IP: 192.168.39.128
	I0311 20:23:14.851878   27491 certs.go:194] generating shared ca certs ...
	I0311 20:23:14.851908   27491 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:14.852110   27491 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 20:23:14.852168   27491 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 20:23:14.852184   27491 certs.go:256] generating profile certs ...
	I0311 20:23:14.852245   27491 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key
	I0311 20:23:14.852266   27491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.crt with IP's: []
	I0311 20:23:14.985304   27491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.crt ...
	I0311 20:23:14.985334   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.crt: {Name:mk8d6d8309a1ad51304337920d227e7e5d9c0124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:14.985496   27491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key ...
	I0311 20:23:14.985509   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key: {Name:mk1304b5cb243ef01eb7fb761ac1e689580d776a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:14.985618   27491 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.1488f95d
	I0311 20:23:14.985643   27491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.1488f95d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.254]
	I0311 20:23:15.178969   27491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.1488f95d ...
	I0311 20:23:15.179002   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.1488f95d: {Name:mk2407342d56deacb6e6a805a37e5e10b19062ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:15.179152   27491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.1488f95d ...
	I0311 20:23:15.179167   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.1488f95d: {Name:mkab517bae76f3fb8b939eae49568621f4bafaeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:15.179240   27491 certs.go:381] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.1488f95d -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt
	I0311 20:23:15.179321   27491 certs.go:385] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.1488f95d -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key
	I0311 20:23:15.179373   27491 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key
	I0311 20:23:15.179387   27491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt with IP's: []
	I0311 20:23:15.408046   27491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt ...
	I0311 20:23:15.408074   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt: {Name:mk3374ab63685e2a88ec78dcc274dc3977a541a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:15.408228   27491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key ...
	I0311 20:23:15.408247   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key: {Name:mk6c5436dc6ec77daf6bbc0f26adfe9debb5c3ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:15.408339   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0311 20:23:15.408360   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0311 20:23:15.408375   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0311 20:23:15.408391   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0311 20:23:15.408409   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0311 20:23:15.408428   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0311 20:23:15.408454   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0311 20:23:15.408474   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0311 20:23:15.408542   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 20:23:15.408586   27491 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 20:23:15.408603   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 20:23:15.408652   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 20:23:15.408685   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 20:23:15.408718   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 20:23:15.408789   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:23:15.408833   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /usr/share/ca-certificates/182352.pem
	I0311 20:23:15.408853   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:23:15.408871   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem -> /usr/share/ca-certificates/18235.pem
	I0311 20:23:15.409491   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 20:23:15.438274   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 20:23:15.465235   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 20:23:15.491202   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 20:23:15.516538   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0311 20:23:15.545154   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 20:23:15.572541   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 20:23:15.598771   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 20:23:15.626643   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 20:23:15.652447   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 20:23:15.682788   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 20:23:15.718846   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 20:23:15.746899   27491 ssh_runner.go:195] Run: openssl version
	I0311 20:23:15.753300   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 20:23:15.766289   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 20:23:15.771507   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 20:23:15.771556   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 20:23:15.777866   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 20:23:15.790374   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 20:23:15.802710   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 20:23:15.807724   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 20:23:15.807769   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 20:23:15.813844   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 20:23:15.826184   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 20:23:15.839114   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:23:15.844127   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:23:15.844161   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:23:15.850235   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
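The symlink names created above come from OpenSSL's subject-hash convention: each CA is linked as <hash>.0 so OpenSSL can look it up by hashed subject. Reproducing the hash for the minikube CA:

    # prints the subject hash; for this CA it is b5213941, matching the symlink created above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0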
	I0311 20:23:15.862455   27491 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 20:23:15.867062   27491 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 20:23:15.867120   27491 kubeadm.go:391] StartCluster: {Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:23:15.867192   27491 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 20:23:15.867247   27491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 20:23:15.910993   27491 cri.go:89] found id: ""
	I0311 20:23:15.911045   27491 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0311 20:23:15.923549   27491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 20:23:15.938228   27491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 20:23:15.949031   27491 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 20:23:15.949048   27491 kubeadm.go:156] found existing configuration files:
	
	I0311 20:23:15.949091   27491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 20:23:15.959625   27491 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 20:23:15.959666   27491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 20:23:15.971165   27491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 20:23:15.981995   27491 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 20:23:15.982054   27491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 20:23:15.993169   27491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 20:23:16.003702   27491 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 20:23:16.003748   27491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 20:23:16.014895   27491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 20:23:16.025503   27491 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 20:23:16.025541   27491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 20:23:16.036481   27491 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 20:23:16.291292   27491 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 20:23:27.584467   27491 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 20:23:27.584546   27491 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 20:23:27.584633   27491 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 20:23:27.584816   27491 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 20:23:27.584932   27491 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 20:23:27.584993   27491 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 20:23:27.586626   27491 out.go:204]   - Generating certificates and keys ...
	I0311 20:23:27.586715   27491 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 20:23:27.586779   27491 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 20:23:27.586853   27491 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0311 20:23:27.586919   27491 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0311 20:23:27.586989   27491 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0311 20:23:27.587037   27491 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0311 20:23:27.587168   27491 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0311 20:23:27.587329   27491 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-834040 localhost] and IPs [192.168.39.128 127.0.0.1 ::1]
	I0311 20:23:27.587410   27491 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0311 20:23:27.587517   27491 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-834040 localhost] and IPs [192.168.39.128 127.0.0.1 ::1]
	I0311 20:23:27.587594   27491 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0311 20:23:27.587692   27491 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0311 20:23:27.587769   27491 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0311 20:23:27.587858   27491 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 20:23:27.587961   27491 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 20:23:27.588036   27491 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 20:23:27.588140   27491 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 20:23:27.588225   27491 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 20:23:27.588346   27491 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 20:23:27.588497   27491 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 20:23:27.591045   27491 out.go:204]   - Booting up control plane ...
	I0311 20:23:27.591154   27491 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 20:23:27.591246   27491 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 20:23:27.591332   27491 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 20:23:27.591485   27491 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 20:23:27.591635   27491 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 20:23:27.591696   27491 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 20:23:27.591903   27491 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 20:23:27.592028   27491 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.611827 seconds
	I0311 20:23:27.592157   27491 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 20:23:27.592315   27491 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 20:23:27.592401   27491 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 20:23:27.592657   27491 kubeadm.go:309] [mark-control-plane] Marking the node ha-834040 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 20:23:27.592748   27491 kubeadm.go:309] [bootstrap-token] Using token: 74fjk6.c6d8spiuhr71ss8c
	I0311 20:23:27.594066   27491 out.go:204]   - Configuring RBAC rules ...
	I0311 20:23:27.594169   27491 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 20:23:27.594266   27491 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 20:23:27.594383   27491 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 20:23:27.594543   27491 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 20:23:27.594684   27491 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 20:23:27.594765   27491 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 20:23:27.594873   27491 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 20:23:27.594941   27491 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 20:23:27.595007   27491 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 20:23:27.595019   27491 kubeadm.go:309] 
	I0311 20:23:27.595082   27491 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 20:23:27.595102   27491 kubeadm.go:309] 
	I0311 20:23:27.595201   27491 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 20:23:27.595212   27491 kubeadm.go:309] 
	I0311 20:23:27.595254   27491 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 20:23:27.595304   27491 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 20:23:27.595351   27491 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 20:23:27.595357   27491 kubeadm.go:309] 
	I0311 20:23:27.595399   27491 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 20:23:27.595405   27491 kubeadm.go:309] 
	I0311 20:23:27.595462   27491 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 20:23:27.595476   27491 kubeadm.go:309] 
	I0311 20:23:27.595548   27491 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 20:23:27.595646   27491 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 20:23:27.595732   27491 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 20:23:27.595741   27491 kubeadm.go:309] 
	I0311 20:23:27.595832   27491 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 20:23:27.595921   27491 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 20:23:27.595935   27491 kubeadm.go:309] 
	I0311 20:23:27.596034   27491 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 74fjk6.c6d8spiuhr71ss8c \
	I0311 20:23:27.596129   27491 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 20:23:27.596167   27491 kubeadm.go:309] 	--control-plane 
	I0311 20:23:27.596176   27491 kubeadm.go:309] 
	I0311 20:23:27.596251   27491 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 20:23:27.596259   27491 kubeadm.go:309] 
	I0311 20:23:27.596364   27491 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 74fjk6.c6d8spiuhr71ss8c \
	I0311 20:23:27.596535   27491 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
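The join command and discovery hash printed above can be regenerated at any time on the control plane; a hedged sketch (the token will be a fresh one, and the hash is derived from the CA in the certificatesDir configured above):

    # mint a new bootstrap token together with a ready-to-run worker join command
    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm token create --print-join-command
    # the --discovery-token-ca-cert-hash is the sha256 of the cluster CA's public key
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt -noout \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'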
	I0311 20:23:27.596549   27491 cni.go:84] Creating CNI manager for ""
	I0311 20:23:27.596556   27491 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0311 20:23:27.598282   27491 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0311 20:23:27.599668   27491 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0311 20:23:27.613258   27491 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0311 20:23:27.613276   27491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0311 20:23:27.680863   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0311 20:23:28.782990   27491 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.102076024s)
	I0311 20:23:28.783029   27491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 20:23:28.783129   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:28.783136   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-834040 minikube.k8s.io/updated_at=2024_03_11T20_23_28_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=ha-834040 minikube.k8s.io/primary=true
	I0311 20:23:28.797118   27491 ops.go:34] apiserver oom_adj: -16
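The oom_adj probe above just confirms the API server runs with a strongly negative OOM score; the same value can be read through the current kernel interface as well:

    pid=$(pgrep kube-apiserver)
    cat /proc/$pid/oom_adj        # legacy knob, -16 in this run
    cat /proc/$pid/oom_score_adj  # current knob; roughly the same preference on the -1000..1000 scale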
	I0311 20:23:28.936340   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:29.436475   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:29.937326   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:30.437245   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:30.937039   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:31.436433   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:31.936475   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:32.436913   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:32.936838   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:33.436996   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:33.937346   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:34.437157   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:34.936644   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:35.436749   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:35.936994   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:36.437427   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:36.937124   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:37.436981   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:37.936722   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:38.436722   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:38.936659   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:39.437313   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:39.554352   27491 kubeadm.go:1106] duration metric: took 10.771297587s to wait for elevateKubeSystemPrivileges
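The burst of repeated "kubectl get sa default" calls above is a simple poll: minikube retries until the default ServiceAccount exists before finishing cluster privilege setup. The same wait, written out directly:

    # poll until the controller-manager has created the default ServiceAccount
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5
    done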
	W0311 20:23:39.554386   27491 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 20:23:39.554396   27491 kubeadm.go:393] duration metric: took 23.687290613s to StartCluster
	I0311 20:23:39.554417   27491 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:39.554505   27491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:23:39.555129   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:39.555362   27491 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:23:39.555385   27491 start.go:240] waiting for startup goroutines ...
	I0311 20:23:39.555370   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0311 20:23:39.555393   27491 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 20:23:39.555475   27491 addons.go:69] Setting storage-provisioner=true in profile "ha-834040"
	I0311 20:23:39.555483   27491 addons.go:69] Setting default-storageclass=true in profile "ha-834040"
	I0311 20:23:39.555513   27491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-834040"
	I0311 20:23:39.555534   27491 addons.go:234] Setting addon storage-provisioner=true in "ha-834040"
	I0311 20:23:39.555570   27491 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:23:39.555599   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:23:39.555958   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:23:39.555965   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:23:39.555987   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:23:39.555992   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:23:39.570754   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33465
	I0311 20:23:39.570870   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44611
	I0311 20:23:39.571149   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:23:39.571274   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:23:39.571660   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:23:39.571677   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:23:39.571784   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:23:39.571802   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:23:39.572016   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:23:39.572097   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:23:39.572225   27491 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:23:39.572632   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:23:39.572662   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:23:39.574400   27491 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:23:39.574753   27491 kapi.go:59] client config for ha-834040: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.crt", KeyFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key", CAFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55640), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 20:23:39.575281   27491 cert_rotation.go:137] Starting client certificate rotation controller
	I0311 20:23:39.575487   27491 addons.go:234] Setting addon default-storageclass=true in "ha-834040"
	I0311 20:23:39.575530   27491 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:23:39.575906   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:23:39.575945   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:23:39.588200   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35557
	I0311 20:23:39.588706   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:23:39.589189   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:23:39.589212   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:23:39.589535   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:23:39.589606   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44189
	I0311 20:23:39.589684   27491 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:23:39.589917   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:23:39.590358   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:23:39.590381   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:23:39.590712   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:23:39.591324   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:23:39.591348   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:23:39.591542   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:39.593976   27491 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 20:23:39.595301   27491 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 20:23:39.595319   27491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 20:23:39.595336   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:39.598640   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:39.599081   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:39.599102   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:39.599298   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:39.599488   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:39.599741   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:39.599890   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:23:39.607515   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40621
	I0311 20:23:39.607899   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:23:39.608364   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:23:39.608390   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:23:39.608690   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:23:39.608909   27491 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:23:39.610421   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:39.610693   27491 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 20:23:39.610711   27491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 20:23:39.610729   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:39.613295   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:39.613655   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:39.613686   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:39.613784   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:39.613948   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:39.614075   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:39.614210   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:23:39.694915   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0311 20:23:39.805571   27491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 20:23:39.812998   27491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 20:23:40.457692   27491 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
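For orientation, the sed pipeline logged at 20:23:39.694915 above rewrites the kube-system/coredns ConfigMap in place; reconstructed from that command alone (not copied from the live cluster), the Corefile gains fragments of roughly this shape:

	log                                    # inserted before the existing "errors" line
	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}                                      # inserted before "forward . /etc/resolv.conf"

This is what the "host record injected into CoreDNS's ConfigMap" message just above confirms.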
	I0311 20:23:40.505920   27491 main.go:141] libmachine: Making call to close driver server
	I0311 20:23:40.505946   27491 main.go:141] libmachine: (ha-834040) Calling .Close
	I0311 20:23:40.506204   27491 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:23:40.506221   27491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:23:40.506222   27491 main.go:141] libmachine: (ha-834040) DBG | Closing plugin on server side
	I0311 20:23:40.506229   27491 main.go:141] libmachine: Making call to close driver server
	I0311 20:23:40.506237   27491 main.go:141] libmachine: (ha-834040) Calling .Close
	I0311 20:23:40.506444   27491 main.go:141] libmachine: (ha-834040) DBG | Closing plugin on server side
	I0311 20:23:40.506463   27491 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:23:40.506476   27491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:23:40.506600   27491 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0311 20:23:40.506613   27491 round_trippers.go:469] Request Headers:
	I0311 20:23:40.506623   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:23:40.506634   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:23:40.517230   27491 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0311 20:23:40.517954   27491 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0311 20:23:40.517973   27491 round_trippers.go:469] Request Headers:
	I0311 20:23:40.517984   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:23:40.517990   27491 round_trippers.go:473]     Content-Type: application/json
	I0311 20:23:40.517995   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:23:40.520666   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:23:40.520938   27491 main.go:141] libmachine: Making call to close driver server
	I0311 20:23:40.520954   27491 main.go:141] libmachine: (ha-834040) Calling .Close
	I0311 20:23:40.521199   27491 main.go:141] libmachine: (ha-834040) DBG | Closing plugin on server side
	I0311 20:23:40.521239   27491 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:23:40.521253   27491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:23:40.765874   27491 main.go:141] libmachine: Making call to close driver server
	I0311 20:23:40.765915   27491 main.go:141] libmachine: (ha-834040) Calling .Close
	I0311 20:23:40.766216   27491 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:23:40.766236   27491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:23:40.766253   27491 main.go:141] libmachine: Making call to close driver server
	I0311 20:23:40.766265   27491 main.go:141] libmachine: (ha-834040) Calling .Close
	I0311 20:23:40.766270   27491 main.go:141] libmachine: (ha-834040) DBG | Closing plugin on server side
	I0311 20:23:40.766565   27491 main.go:141] libmachine: (ha-834040) DBG | Closing plugin on server side
	I0311 20:23:40.766597   27491 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:23:40.766637   27491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:23:40.768431   27491 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0311 20:23:40.769546   27491 addons.go:505] duration metric: took 1.214154488s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0311 20:23:40.769588   27491 start.go:245] waiting for cluster config update ...
	I0311 20:23:40.769603   27491 start.go:254] writing updated cluster config ...
	I0311 20:23:40.771148   27491 out.go:177] 
	I0311 20:23:40.772436   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:23:40.772500   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:23:40.774142   27491 out.go:177] * Starting "ha-834040-m02" control-plane node in "ha-834040" cluster
	I0311 20:23:40.775808   27491 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:23:40.775830   27491 cache.go:56] Caching tarball of preloaded images
	I0311 20:23:40.775924   27491 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 20:23:40.775937   27491 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 20:23:40.776026   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:23:40.776355   27491 start.go:360] acquireMachinesLock for ha-834040-m02: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 20:23:40.776405   27491 start.go:364] duration metric: took 29.972µs to acquireMachinesLock for "ha-834040-m02"
	I0311 20:23:40.776429   27491 start.go:93] Provisioning new machine with config: &{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:23:40.776499   27491 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0311 20:23:40.778149   27491 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 20:23:40.778231   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:23:40.778259   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:23:40.792485   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
	I0311 20:23:40.792879   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:23:40.793313   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:23:40.793334   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:23:40.793623   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:23:40.793831   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetMachineName
	I0311 20:23:40.793986   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:23:40.794127   27491 start.go:159] libmachine.API.Create for "ha-834040" (driver="kvm2")
	I0311 20:23:40.794165   27491 client.go:168] LocalClient.Create starting
	I0311 20:23:40.794208   27491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem
	I0311 20:23:40.794257   27491 main.go:141] libmachine: Decoding PEM data...
	I0311 20:23:40.794271   27491 main.go:141] libmachine: Parsing certificate...
	I0311 20:23:40.794326   27491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem
	I0311 20:23:40.794344   27491 main.go:141] libmachine: Decoding PEM data...
	I0311 20:23:40.794354   27491 main.go:141] libmachine: Parsing certificate...
	I0311 20:23:40.794369   27491 main.go:141] libmachine: Running pre-create checks...
	I0311 20:23:40.794376   27491 main.go:141] libmachine: (ha-834040-m02) Calling .PreCreateCheck
	I0311 20:23:40.794530   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetConfigRaw
	I0311 20:23:40.794857   27491 main.go:141] libmachine: Creating machine...
	I0311 20:23:40.794869   27491 main.go:141] libmachine: (ha-834040-m02) Calling .Create
	I0311 20:23:40.794986   27491 main.go:141] libmachine: (ha-834040-m02) Creating KVM machine...
	I0311 20:23:40.796130   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found existing default KVM network
	I0311 20:23:40.796227   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found existing private KVM network mk-ha-834040
	I0311 20:23:40.796357   27491 main.go:141] libmachine: (ha-834040-m02) Setting up store path in /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02 ...
	I0311 20:23:40.796383   27491 main.go:141] libmachine: (ha-834040-m02) Building disk image from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0311 20:23:40.796447   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:40.796348   27831 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:23:40.796564   27491 main.go:141] libmachine: (ha-834040-m02) Downloading /home/jenkins/minikube-integration/18358-11004/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0311 20:23:41.004705   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:41.004590   27831 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa...
	I0311 20:23:41.167886   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:41.167788   27831 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/ha-834040-m02.rawdisk...
	I0311 20:23:41.167935   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Writing magic tar header
	I0311 20:23:41.167948   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Writing SSH key tar header
	I0311 20:23:41.168800   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:41.168642   27831 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02 ...
	I0311 20:23:41.169426   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02
	I0311 20:23:41.169447   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines
	I0311 20:23:41.169460   27491 main.go:141] libmachine: (ha-834040-m02) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02 (perms=drwx------)
	I0311 20:23:41.169472   27491 main.go:141] libmachine: (ha-834040-m02) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines (perms=drwxr-xr-x)
	I0311 20:23:41.169483   27491 main.go:141] libmachine: (ha-834040-m02) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube (perms=drwxr-xr-x)
	I0311 20:23:41.169499   27491 main.go:141] libmachine: (ha-834040-m02) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004 (perms=drwxrwxr-x)
	I0311 20:23:41.169512   27491 main.go:141] libmachine: (ha-834040-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0311 20:23:41.169525   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:23:41.169541   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004
	I0311 20:23:41.169559   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0311 20:23:41.169572   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home/jenkins
	I0311 20:23:41.169580   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home
	I0311 20:23:41.169592   27491 main.go:141] libmachine: (ha-834040-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0311 20:23:41.169606   27491 main.go:141] libmachine: (ha-834040-m02) Creating domain...
	I0311 20:23:41.169626   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Skipping /home - not owner
	I0311 20:23:41.170361   27491 main.go:141] libmachine: (ha-834040-m02) define libvirt domain using xml: 
	I0311 20:23:41.170382   27491 main.go:141] libmachine: (ha-834040-m02) <domain type='kvm'>
	I0311 20:23:41.170393   27491 main.go:141] libmachine: (ha-834040-m02)   <name>ha-834040-m02</name>
	I0311 20:23:41.170401   27491 main.go:141] libmachine: (ha-834040-m02)   <memory unit='MiB'>2200</memory>
	I0311 20:23:41.170410   27491 main.go:141] libmachine: (ha-834040-m02)   <vcpu>2</vcpu>
	I0311 20:23:41.170424   27491 main.go:141] libmachine: (ha-834040-m02)   <features>
	I0311 20:23:41.170437   27491 main.go:141] libmachine: (ha-834040-m02)     <acpi/>
	I0311 20:23:41.170444   27491 main.go:141] libmachine: (ha-834040-m02)     <apic/>
	I0311 20:23:41.170451   27491 main.go:141] libmachine: (ha-834040-m02)     <pae/>
	I0311 20:23:41.170456   27491 main.go:141] libmachine: (ha-834040-m02)     
	I0311 20:23:41.170462   27491 main.go:141] libmachine: (ha-834040-m02)   </features>
	I0311 20:23:41.170469   27491 main.go:141] libmachine: (ha-834040-m02)   <cpu mode='host-passthrough'>
	I0311 20:23:41.170475   27491 main.go:141] libmachine: (ha-834040-m02)   
	I0311 20:23:41.170481   27491 main.go:141] libmachine: (ha-834040-m02)   </cpu>
	I0311 20:23:41.170487   27491 main.go:141] libmachine: (ha-834040-m02)   <os>
	I0311 20:23:41.170492   27491 main.go:141] libmachine: (ha-834040-m02)     <type>hvm</type>
	I0311 20:23:41.170517   27491 main.go:141] libmachine: (ha-834040-m02)     <boot dev='cdrom'/>
	I0311 20:23:41.170540   27491 main.go:141] libmachine: (ha-834040-m02)     <boot dev='hd'/>
	I0311 20:23:41.170550   27491 main.go:141] libmachine: (ha-834040-m02)     <bootmenu enable='no'/>
	I0311 20:23:41.170560   27491 main.go:141] libmachine: (ha-834040-m02)   </os>
	I0311 20:23:41.170569   27491 main.go:141] libmachine: (ha-834040-m02)   <devices>
	I0311 20:23:41.170580   27491 main.go:141] libmachine: (ha-834040-m02)     <disk type='file' device='cdrom'>
	I0311 20:23:41.170591   27491 main.go:141] libmachine: (ha-834040-m02)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/boot2docker.iso'/>
	I0311 20:23:41.170598   27491 main.go:141] libmachine: (ha-834040-m02)       <target dev='hdc' bus='scsi'/>
	I0311 20:23:41.170603   27491 main.go:141] libmachine: (ha-834040-m02)       <readonly/>
	I0311 20:23:41.170613   27491 main.go:141] libmachine: (ha-834040-m02)     </disk>
	I0311 20:23:41.170627   27491 main.go:141] libmachine: (ha-834040-m02)     <disk type='file' device='disk'>
	I0311 20:23:41.170647   27491 main.go:141] libmachine: (ha-834040-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0311 20:23:41.170678   27491 main.go:141] libmachine: (ha-834040-m02)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/ha-834040-m02.rawdisk'/>
	I0311 20:23:41.170690   27491 main.go:141] libmachine: (ha-834040-m02)       <target dev='hda' bus='virtio'/>
	I0311 20:23:41.170702   27491 main.go:141] libmachine: (ha-834040-m02)     </disk>
	I0311 20:23:41.170714   27491 main.go:141] libmachine: (ha-834040-m02)     <interface type='network'>
	I0311 20:23:41.170725   27491 main.go:141] libmachine: (ha-834040-m02)       <source network='mk-ha-834040'/>
	I0311 20:23:41.170732   27491 main.go:141] libmachine: (ha-834040-m02)       <model type='virtio'/>
	I0311 20:23:41.170739   27491 main.go:141] libmachine: (ha-834040-m02)     </interface>
	I0311 20:23:41.170756   27491 main.go:141] libmachine: (ha-834040-m02)     <interface type='network'>
	I0311 20:23:41.170769   27491 main.go:141] libmachine: (ha-834040-m02)       <source network='default'/>
	I0311 20:23:41.170780   27491 main.go:141] libmachine: (ha-834040-m02)       <model type='virtio'/>
	I0311 20:23:41.170791   27491 main.go:141] libmachine: (ha-834040-m02)     </interface>
	I0311 20:23:41.170804   27491 main.go:141] libmachine: (ha-834040-m02)     <serial type='pty'>
	I0311 20:23:41.170830   27491 main.go:141] libmachine: (ha-834040-m02)       <target port='0'/>
	I0311 20:23:41.170852   27491 main.go:141] libmachine: (ha-834040-m02)     </serial>
	I0311 20:23:41.170867   27491 main.go:141] libmachine: (ha-834040-m02)     <console type='pty'>
	I0311 20:23:41.170880   27491 main.go:141] libmachine: (ha-834040-m02)       <target type='serial' port='0'/>
	I0311 20:23:41.170893   27491 main.go:141] libmachine: (ha-834040-m02)     </console>
	I0311 20:23:41.170904   27491 main.go:141] libmachine: (ha-834040-m02)     <rng model='virtio'>
	I0311 20:23:41.170916   27491 main.go:141] libmachine: (ha-834040-m02)       <backend model='random'>/dev/random</backend>
	I0311 20:23:41.170931   27491 main.go:141] libmachine: (ha-834040-m02)     </rng>
	I0311 20:23:41.170943   27491 main.go:141] libmachine: (ha-834040-m02)     
	I0311 20:23:41.170950   27491 main.go:141] libmachine: (ha-834040-m02)     
	I0311 20:23:41.170974   27491 main.go:141] libmachine: (ha-834040-m02)   </devices>
	I0311 20:23:41.170984   27491 main.go:141] libmachine: (ha-834040-m02) </domain>
	I0311 20:23:41.171018   27491 main.go:141] libmachine: (ha-834040-m02) 
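The XML printed above is what the kvm2 driver defines through the libvirt API for the m02 node. Purely as a sketch (none of these commands are run by the test), the equivalent stock libvirt CLI calls against the same domain would be:

	virsh -c qemu:///system dumpxml ha-834040-m02     # print the domain XML libvirt now holds
	virsh -c qemu:///system start ha-834040-m02       # boot the VM (the driver does this via the API)
	virsh -c qemu:///system domifaddr ha-834040-m02   # report the DHCP lease the "Waiting to get IP" loop below polls for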
	I0311 20:23:41.177811   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:58:dc:7f in network default
	I0311 20:23:41.178336   27491 main.go:141] libmachine: (ha-834040-m02) Ensuring networks are active...
	I0311 20:23:41.178354   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:41.178969   27491 main.go:141] libmachine: (ha-834040-m02) Ensuring network default is active
	I0311 20:23:41.179227   27491 main.go:141] libmachine: (ha-834040-m02) Ensuring network mk-ha-834040 is active
	I0311 20:23:41.179558   27491 main.go:141] libmachine: (ha-834040-m02) Getting domain xml...
	I0311 20:23:41.180207   27491 main.go:141] libmachine: (ha-834040-m02) Creating domain...
	I0311 20:23:42.419030   27491 main.go:141] libmachine: (ha-834040-m02) Waiting to get IP...
	I0311 20:23:42.419932   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:42.420385   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:42.420413   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:42.420368   27831 retry.go:31] will retry after 217.532188ms: waiting for machine to come up
	I0311 20:23:42.639741   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:42.640457   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:42.640500   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:42.640372   27831 retry.go:31] will retry after 333.50749ms: waiting for machine to come up
	I0311 20:23:42.976015   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:42.976464   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:42.976498   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:42.976413   27831 retry.go:31] will retry after 394.228373ms: waiting for machine to come up
	I0311 20:23:43.372441   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:43.372843   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:43.372899   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:43.372814   27831 retry.go:31] will retry after 486.843036ms: waiting for machine to come up
	I0311 20:23:43.861414   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:43.861827   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:43.861854   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:43.861782   27831 retry.go:31] will retry after 613.031869ms: waiting for machine to come up
	I0311 20:23:44.476018   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:44.476408   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:44.476436   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:44.476359   27831 retry.go:31] will retry after 651.873525ms: waiting for machine to come up
	I0311 20:23:45.130232   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:45.130649   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:45.130672   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:45.130601   27831 retry.go:31] will retry after 1.171639293s: waiting for machine to come up
	I0311 20:23:46.303731   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:46.304221   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:46.304283   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:46.304202   27831 retry.go:31] will retry after 1.432679492s: waiting for machine to come up
	I0311 20:23:47.738705   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:47.739138   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:47.739164   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:47.739097   27831 retry.go:31] will retry after 1.483296056s: waiting for machine to come up
	I0311 20:23:49.224835   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:49.225279   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:49.225309   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:49.225215   27831 retry.go:31] will retry after 1.659262357s: waiting for machine to come up
	I0311 20:23:50.886341   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:50.886726   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:50.886753   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:50.886665   27831 retry.go:31] will retry after 2.704023891s: waiting for machine to come up
	I0311 20:23:53.593500   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:53.593958   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:53.593981   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:53.593899   27831 retry.go:31] will retry after 3.13007858s: waiting for machine to come up
	I0311 20:23:56.725318   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:56.725702   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:56.725730   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:56.725658   27831 retry.go:31] will retry after 3.149880361s: waiting for machine to come up
	I0311 20:23:59.877708   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:59.878066   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:59.878103   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:59.878035   27831 retry.go:31] will retry after 3.423556103s: waiting for machine to come up
	I0311 20:24:03.304140   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:03.304598   27491 main.go:141] libmachine: (ha-834040-m02) Found IP for machine: 192.168.39.101
	I0311 20:24:03.304628   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has current primary IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:03.304641   27491 main.go:141] libmachine: (ha-834040-m02) Reserving static IP address...
	I0311 20:24:03.305014   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find host DHCP lease matching {name: "ha-834040-m02", mac: "52:54:00:82:4e:e5", ip: "192.168.39.101"} in network mk-ha-834040
	I0311 20:24:03.374130   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Getting to WaitForSSH function...
	I0311 20:24:03.374159   27491 main.go:141] libmachine: (ha-834040-m02) Reserved static IP address: 192.168.39.101
	I0311 20:24:03.374173   27491 main.go:141] libmachine: (ha-834040-m02) Waiting for SSH to be available...
	I0311 20:24:03.376636   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:03.377035   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040
	I0311 20:24:03.377063   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find defined IP address of network mk-ha-834040 interface with MAC address 52:54:00:82:4e:e5
	I0311 20:24:03.377200   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Using SSH client type: external
	I0311 20:24:03.377226   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa (-rw-------)
	I0311 20:24:03.377274   27491 main.go:141] libmachine: (ha-834040-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 20:24:03.377295   27491 main.go:141] libmachine: (ha-834040-m02) DBG | About to run SSH command:
	I0311 20:24:03.377313   27491 main.go:141] libmachine: (ha-834040-m02) DBG | exit 0
	I0311 20:24:03.380589   27491 main.go:141] libmachine: (ha-834040-m02) DBG | SSH cmd err, output: exit status 255: 
	I0311 20:24:03.380611   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0311 20:24:03.380620   27491 main.go:141] libmachine: (ha-834040-m02) DBG | command : exit 0
	I0311 20:24:03.380628   27491 main.go:141] libmachine: (ha-834040-m02) DBG | err     : exit status 255
	I0311 20:24:03.380639   27491 main.go:141] libmachine: (ha-834040-m02) DBG | output  : 
	I0311 20:24:06.380794   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Getting to WaitForSSH function...
	I0311 20:24:06.383297   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.383746   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.383779   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.383905   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Using SSH client type: external
	I0311 20:24:06.383933   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa (-rw-------)
	I0311 20:24:06.383971   27491 main.go:141] libmachine: (ha-834040-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 20:24:06.383986   27491 main.go:141] libmachine: (ha-834040-m02) DBG | About to run SSH command:
	I0311 20:24:06.384001   27491 main.go:141] libmachine: (ha-834040-m02) DBG | exit 0
	I0311 20:24:06.513079   27491 main.go:141] libmachine: (ha-834040-m02) DBG | SSH cmd err, output: <nil>: 
	I0311 20:24:06.513322   27491 main.go:141] libmachine: (ha-834040-m02) KVM machine creation complete!
	I0311 20:24:06.513618   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetConfigRaw
	I0311 20:24:06.514112   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:06.514295   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:06.514454   27491 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0311 20:24:06.514472   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetState
	I0311 20:24:06.515648   27491 main.go:141] libmachine: Detecting operating system of created instance...
	I0311 20:24:06.515662   27491 main.go:141] libmachine: Waiting for SSH to be available...
	I0311 20:24:06.515670   27491 main.go:141] libmachine: Getting to WaitForSSH function...
	I0311 20:24:06.515688   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:06.517702   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.518022   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.518046   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.518173   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:06.518319   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.518466   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.518590   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:06.518760   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:24:06.518949   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0311 20:24:06.518970   27491 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0311 20:24:06.620010   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:24:06.620036   27491 main.go:141] libmachine: Detecting the provisioner...
	I0311 20:24:06.620043   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:06.622606   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.622909   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.622923   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.623126   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:06.623323   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.623481   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.623627   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:06.623786   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:24:06.623952   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0311 20:24:06.623962   27491 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0311 20:24:06.729728   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0311 20:24:06.729793   27491 main.go:141] libmachine: found compatible host: buildroot
	I0311 20:24:06.729802   27491 main.go:141] libmachine: Provisioning with buildroot...
	I0311 20:24:06.729809   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetMachineName
	I0311 20:24:06.729997   27491 buildroot.go:166] provisioning hostname "ha-834040-m02"
	I0311 20:24:06.730024   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetMachineName
	I0311 20:24:06.730223   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:06.732708   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.733081   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.733108   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.733237   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:06.733439   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.733607   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.733765   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:06.733912   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:24:06.734117   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0311 20:24:06.734141   27491 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-834040-m02 && echo "ha-834040-m02" | sudo tee /etc/hostname
	I0311 20:24:06.852819   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-834040-m02
	
	I0311 20:24:06.852848   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:06.855276   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.855581   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.855610   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.855733   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:06.855923   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.856126   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.856276   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:06.856421   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:24:06.856595   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0311 20:24:06.856617   27491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-834040-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-834040-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-834040-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 20:24:06.974343   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:24:06.974366   27491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 20:24:06.974392   27491 buildroot.go:174] setting up certificates
	I0311 20:24:06.974403   27491 provision.go:84] configureAuth start
	I0311 20:24:06.974415   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetMachineName
	I0311 20:24:06.974661   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:24:06.976862   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.977166   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.977193   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.977289   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:06.979104   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.979416   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.979436   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.979546   27491 provision.go:143] copyHostCerts
	I0311 20:24:06.979573   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:24:06.979599   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 20:24:06.979608   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:24:06.979668   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 20:24:06.979730   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:24:06.979748   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 20:24:06.979754   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:24:06.979776   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 20:24:06.979814   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:24:06.979831   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 20:24:06.979834   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:24:06.979855   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 20:24:06.979896   27491 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.ha-834040-m02 san=[127.0.0.1 192.168.39.101 ha-834040-m02 localhost minikube]
	I0311 20:24:07.106447   27491 provision.go:177] copyRemoteCerts
	I0311 20:24:07.106502   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 20:24:07.106523   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:07.108974   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.109246   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.109281   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.109405   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:07.109577   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.109734   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:07.109896   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
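The sshutil line above builds an SSH client from the machine's IP, port 22 and its id_rsa key; the subsequent Run:/scp lines all go through a client like it. A minimal sketch of such a client with golang.org/x/crypto/ssh (illustration only, not minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", "192.168.39.101:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("sudo mkdir -p /etc/docker")
        fmt.Println(string(out), err)
    }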
	I0311 20:24:07.191972   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0311 20:24:07.192039   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 20:24:07.221996   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0311 20:24:07.222049   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0311 20:24:07.251245   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0311 20:24:07.251301   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 20:24:07.281208   27491 provision.go:87] duration metric: took 306.794898ms to configureAuth
	I0311 20:24:07.281232   27491 buildroot.go:189] setting minikube options for container-runtime
	I0311 20:24:07.281395   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:24:07.281485   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:07.283952   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.284307   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.284335   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.284465   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:07.284642   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.284845   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.285023   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:07.285227   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:24:07.285436   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0311 20:24:07.285459   27491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 20:24:07.587453   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 20:24:07.587482   27491 main.go:141] libmachine: Checking connection to Docker...
	I0311 20:24:07.587489   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetURL
	I0311 20:24:07.588905   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Using libvirt version 6000000
	I0311 20:24:07.590987   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.591311   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.591341   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.591489   27491 main.go:141] libmachine: Docker is up and running!
	I0311 20:24:07.591503   27491 main.go:141] libmachine: Reticulating splines...
	I0311 20:24:07.591509   27491 client.go:171] duration metric: took 26.797329558s to LocalClient.Create
	I0311 20:24:07.591527   27491 start.go:167] duration metric: took 26.797403966s to libmachine.API.Create "ha-834040"
	I0311 20:24:07.591536   27491 start.go:293] postStartSetup for "ha-834040-m02" (driver="kvm2")
	I0311 20:24:07.591545   27491 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 20:24:07.591568   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:07.591788   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 20:24:07.591815   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:07.593777   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.594109   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.594136   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.594241   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:07.594411   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.594558   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:07.594681   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	I0311 20:24:07.676592   27491 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 20:24:07.681314   27491 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 20:24:07.681335   27491 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 20:24:07.681401   27491 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 20:24:07.681489   27491 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 20:24:07.681500   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /etc/ssl/certs/182352.pem
	I0311 20:24:07.681597   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 20:24:07.692193   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:24:07.717649   27491 start.go:296] duration metric: took 126.100619ms for postStartSetup
	I0311 20:24:07.717720   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetConfigRaw
	I0311 20:24:07.718239   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:24:07.720677   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.721071   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.721102   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.721270   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:24:07.721428   27491 start.go:128] duration metric: took 26.944919506s to createHost
	I0311 20:24:07.721447   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:07.723303   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.723569   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.723598   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.723721   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:07.723920   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.724073   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.724183   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:07.724333   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:24:07.724482   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0311 20:24:07.724492   27491 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 20:24:07.830939   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710188647.805297545
	
	I0311 20:24:07.830964   27491 fix.go:216] guest clock: 1710188647.805297545
	I0311 20:24:07.830975   27491 fix.go:229] Guest: 2024-03-11 20:24:07.805297545 +0000 UTC Remote: 2024-03-11 20:24:07.721438169 +0000 UTC m=+82.411025538 (delta=83.859376ms)
	I0311 20:24:07.830998   27491 fix.go:200] guest clock delta is within tolerance: 83.859376ms
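The clock check above parses the guest's `date +%s.%N` output, compares it with the host-side timestamp, and accepts the ~84ms delta as within tolerance. A small sketch of that comparison (the one-second tolerance here is an assumption for illustration):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1710188647.805297545" // `date +%s.%N` output captured over SSH

        parts := strings.SplitN(guestOut, ".", 2)
        secs, _ := strconv.ParseInt(parts[0], 10, 64)
        nanos, _ := strconv.ParseInt(parts[1], 10, 64) // %N always prints 9 fractional digits
        guest := time.Unix(secs, nanos)

        // host-side view of "now" when the command returned, from the log line above
        remote := time.Date(2024, 3, 11, 20, 24, 7, 721438169, time.UTC)

        delta := guest.Sub(remote)
        tolerance := time.Second // assumed threshold for this sketch
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, clock would need correcting\n", delta)
        }
    }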
	I0311 20:24:07.831010   27491 start.go:83] releasing machines lock for "ha-834040-m02", held for 27.054592054s
	I0311 20:24:07.831037   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:07.831292   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:24:07.833986   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.834320   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.834348   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.836593   27491 out.go:177] * Found network options:
	I0311 20:24:07.837897   27491 out.go:177]   - NO_PROXY=192.168.39.128
	W0311 20:24:07.839082   27491 proxy.go:119] fail to check proxy env: Error ip not in block
	I0311 20:24:07.839106   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:07.839584   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:07.839758   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:07.839848   27491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 20:24:07.839884   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	W0311 20:24:07.839927   27491 proxy.go:119] fail to check proxy env: Error ip not in block
	I0311 20:24:07.839993   27491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 20:24:07.840015   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:07.842346   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.842683   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.842709   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.842727   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.842853   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:07.843023   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.843164   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:07.843188   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.843215   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.843314   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	I0311 20:24:07.843330   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:07.843473   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.843609   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:07.843756   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	I0311 20:24:08.078887   27491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 20:24:08.085642   27491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 20:24:08.085706   27491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 20:24:08.102794   27491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
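The find command above renames any bridge or podman CNI config so only the expected CNI stays active. A Go sketch of the equivalent rename pass (illustration only; the real step is the remote find/mv shown above and needs root):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    panic(err)
                }
                fmt.Println("disabled", src)
            }
        }
    }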
	I0311 20:24:08.102821   27491 start.go:494] detecting cgroup driver to use...
	I0311 20:24:08.102876   27491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 20:24:08.119750   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 20:24:08.134083   27491 docker.go:217] disabling cri-docker service (if available) ...
	I0311 20:24:08.134122   27491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 20:24:08.148007   27491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 20:24:08.161964   27491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 20:24:08.286584   27491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 20:24:08.464092   27491 docker.go:233] disabling docker service ...
	I0311 20:24:08.464189   27491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 20:24:08.480143   27491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 20:24:08.493994   27491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 20:24:08.619937   27491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 20:24:08.741948   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 20:24:08.760924   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 20:24:08.784235   27491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 20:24:08.784287   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:24:08.795915   27491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 20:24:08.795961   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:24:08.807425   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:24:08.819277   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
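The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf: set the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, and replace any conmon_cgroup setting with "pod". A Go regexp sketch of the same rewrite on an in-memory sample (the sample file contents are illustrative; the real flow runs sed over SSH):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // illustrative starting contents of /etc/crio/crio.conf.d/02-crio.conf
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.8"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `

        // 1. point CRI-O at the pause image kubeadm expects
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        // 2. drop any existing conmon_cgroup line (the sed '/conmon_cgroup = .*/d' step)
        conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        // 3. switch the cgroup manager and re-add conmon_cgroup right after it
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

        fmt.Print(conf)
    }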
	I0311 20:24:08.830931   27491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 20:24:08.842775   27491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 20:24:08.853248   27491 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 20:24:08.853293   27491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 20:24:08.868526   27491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 20:24:08.879401   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:24:08.996543   27491 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 20:24:09.139370   27491 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 20:24:09.139462   27491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 20:24:09.144664   27491 start.go:562] Will wait 60s for crictl version
	I0311 20:24:09.144714   27491 ssh_runner.go:195] Run: which crictl
	I0311 20:24:09.149034   27491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 20:24:09.185654   27491 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 20:24:09.185731   27491 ssh_runner.go:195] Run: crio --version
	I0311 20:24:09.215883   27491 ssh_runner.go:195] Run: crio --version
	I0311 20:24:09.247430   27491 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 20:24:09.248714   27491 out.go:177]   - env NO_PROXY=192.168.39.128
	I0311 20:24:09.249991   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:24:09.252590   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:09.252997   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:09.253022   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:09.253192   27491 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 20:24:09.257888   27491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
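The one-liner above drops any stale host.minikube.internal entry from the guest's /etc/hosts and appends the gateway mapping 192.168.39.1. A rough Go equivalent (illustration only; it stages the result in /tmp the way the logged command does, and the final copy into /etc/hosts still needs sudo):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue // drop the stale entry, as the grep -v does
            }
            kept = append(kept, line)
        }
        kept = append(kept, "192.168.39.1\thost.minikube.internal")

        // stage the result; the logged command then runs `sudo cp /tmp/h.$$ /etc/hosts`
        if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }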
	I0311 20:24:09.271482   27491 mustload.go:65] Loading cluster: ha-834040
	I0311 20:24:09.271645   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:24:09.271876   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:24:09.271915   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:24:09.286805   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0311 20:24:09.287155   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:24:09.287645   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:24:09.287672   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:24:09.287951   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:24:09.288128   27491 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:24:09.289622   27491 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:24:09.289892   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:24:09.289922   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:24:09.303513   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0311 20:24:09.303833   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:24:09.304235   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:24:09.304257   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:24:09.304587   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:24:09.304763   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:24:09.304908   27491 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040 for IP: 192.168.39.101
	I0311 20:24:09.304920   27491 certs.go:194] generating shared ca certs ...
	I0311 20:24:09.304938   27491 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:24:09.305043   27491 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 20:24:09.305081   27491 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 20:24:09.305090   27491 certs.go:256] generating profile certs ...
	I0311 20:24:09.305155   27491 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key
	I0311 20:24:09.305175   27491 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.2645eb02
	I0311 20:24:09.305188   27491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.2645eb02 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.101 192.168.39.254]
	I0311 20:24:09.446752   27491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.2645eb02 ...
	I0311 20:24:09.446779   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.2645eb02: {Name:mk1103d1562a60daa1f3efd4d01a6beca972a730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:24:09.446934   27491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.2645eb02 ...
	I0311 20:24:09.446945   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.2645eb02: {Name:mk32d0fe2fab477620d0edc7e12451103a7a72fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:24:09.447011   27491 certs.go:381] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.2645eb02 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt
	I0311 20:24:09.447132   27491 certs.go:385] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.2645eb02 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key
	I0311 20:24:09.447249   27491 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key
	I0311 20:24:09.447264   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0311 20:24:09.447276   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0311 20:24:09.447288   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0311 20:24:09.447303   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0311 20:24:09.447315   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0311 20:24:09.447327   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0311 20:24:09.447339   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0311 20:24:09.447350   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0311 20:24:09.447391   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 20:24:09.447418   27491 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 20:24:09.447428   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 20:24:09.447449   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 20:24:09.447475   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 20:24:09.447495   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 20:24:09.447531   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:24:09.447556   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem -> /usr/share/ca-certificates/18235.pem
	I0311 20:24:09.447569   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /usr/share/ca-certificates/182352.pem
	I0311 20:24:09.447581   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:24:09.447607   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:24:09.450220   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:24:09.450652   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:24:09.450673   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:24:09.450855   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:24:09.451043   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:24:09.451215   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:24:09.451375   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:24:09.525043   27491 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0311 20:24:09.531003   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0311 20:24:09.544207   27491 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0311 20:24:09.549189   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0311 20:24:09.570445   27491 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0311 20:24:09.576168   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0311 20:24:09.593681   27491 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0311 20:24:09.598663   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0311 20:24:09.612987   27491 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0311 20:24:09.617778   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0311 20:24:09.630741   27491 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0311 20:24:09.635763   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0311 20:24:09.647679   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 20:24:09.678481   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 20:24:09.704624   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 20:24:09.730316   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 20:24:09.755545   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0311 20:24:09.781200   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 20:24:09.808784   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 20:24:09.836481   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 20:24:09.863455   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 20:24:09.889122   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 20:24:09.914811   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 20:24:09.940210   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0311 20:24:09.957419   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0311 20:24:09.975644   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0311 20:24:09.993250   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0311 20:24:10.011124   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0311 20:24:10.029276   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0311 20:24:10.047396   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0311 20:24:10.066361   27491 ssh_runner.go:195] Run: openssl version
	I0311 20:24:10.072094   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 20:24:10.082958   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 20:24:10.087473   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 20:24:10.087509   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 20:24:10.093233   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 20:24:10.103994   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 20:24:10.114513   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:24:10.119081   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:24:10.119135   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:24:10.125133   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 20:24:10.136484   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 20:24:10.147342   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 20:24:10.152013   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 20:24:10.152054   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 20:24:10.157909   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
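Each CA above is installed by hashing it with `openssl x509 -hash -noout` and linking /etc/ssl/certs/<hash>.0 at it (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the test certs). A sketch of that step with os/exec (illustration; minikube runs the shell equivalents over SSH, and the symlink needs root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        certPath := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941", matching the log above
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(certPath, link); err != nil {
                panic(err)
            }
        }
        fmt.Println("linked", link, "->", certPath)
    }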
	I0311 20:24:10.168551   27491 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 20:24:10.172884   27491 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 20:24:10.172928   27491 kubeadm.go:928] updating node {m02 192.168.39.101 8443 v1.28.4 crio true true} ...
	I0311 20:24:10.173005   27491 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-834040-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 20:24:10.173034   27491 kube-vip.go:101] generating kube-vip config ...
	I0311 20:24:10.173065   27491 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
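The kube-vip static-pod manifest above is generated per control-plane node with the VIP (192.168.39.254) and API port (8443) filled in. A minimal text/template sketch of that kind of templating (the abbreviated manifest and field names are illustrative, not minikube's kube-vip.go):

    package main

    import (
        "os"
        "text/template"
    )

    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.7.1
        args: ["manager"]
        env:
        - name: port
          value: "{{ .Port }}"
        - name: address
          value: {{ .VIP }}
      hostNetwork: true
    `

    func main() {
        tmpl := template.Must(template.New("kube-vip").Parse(manifest))
        data := struct {
            VIP  string
            Port int
        }{VIP: "192.168.39.254", Port: 8443}
        if err := tmpl.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }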
	I0311 20:24:10.173101   27491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 20:24:10.182341   27491 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0311 20:24:10.182376   27491 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0311 20:24:10.191773   27491 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0311 20:24:10.191803   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0311 20:24:10.191866   27491 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0311 20:24:10.191896   27491 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0311 20:24:10.191912   27491 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0311 20:24:10.197721   27491 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0311 20:24:10.197742   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
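The kubelet and kubeadm binaries are fetched with a `?checksum=file:<url>.sha256` hint, i.e. the download is verified against the digest published next to it before being cached and copied to the node. A sketch of that verify step (URLs taken from the log; error handling trimmed, and the whole binary is held in memory for brevity):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func fetch(url string) []byte {
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        b, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        return b
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet"
        bin := fetch(base)
        want := strings.Fields(string(fetch(base + ".sha256")))[0] // published digest

        sum := sha256.Sum256(bin)
        got := hex.EncodeToString(sum[:])
        if got != want {
            panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
        }
        fmt.Println("kubelet checksum verified:", got)
    }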
	I0311 20:24:11.331282   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:24:11.346321   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0311 20:24:11.346398   27491 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0311 20:24:11.350883   27491 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0311 20:24:11.350906   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0311 20:24:14.211514   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0311 20:24:14.211607   27491 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0311 20:24:14.217392   27491 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0311 20:24:14.217428   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0311 20:24:14.491181   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0311 20:24:14.503700   27491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0311 20:24:14.524204   27491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 20:24:14.543304   27491 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0311 20:24:14.561587   27491 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0311 20:24:14.567737   27491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 20:24:14.582271   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:24:14.719542   27491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:24:14.738631   27491 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:24:14.738940   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:24:14.738982   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:24:14.753758   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I0311 20:24:14.754152   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:24:14.754595   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:24:14.754615   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:24:14.755009   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:24:14.755218   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:24:14.755377   27491 start.go:316] joinCluster: &{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cluster
Name:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:24:14.755460   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0311 20:24:14.755474   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:24:14.758430   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:24:14.758848   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:24:14.758878   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:24:14.759017   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:24:14.759176   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:24:14.759323   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:24:14.759477   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:24:14.933008   27491 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:24:14.933048   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c16koq.5cz3h51ea7m9fsz3 --discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-834040-m02 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443"
	I0311 20:24:56.000347   27491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c16koq.5cz3h51ea7m9fsz3 --discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-834040-m02 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443": (41.067273974s)
	I0311 20:24:56.000374   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0311 20:24:56.434187   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-834040-m02 minikube.k8s.io/updated_at=2024_03_11T20_24_56_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=ha-834040 minikube.k8s.io/primary=false
	I0311 20:24:56.614310   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-834040-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0311 20:24:56.741847   27491 start.go:318] duration metric: took 41.986464707s to joinCluster
	I0311 20:24:56.741918   27491 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:24:56.743311   27491 out.go:177] * Verifying Kubernetes components...
	I0311 20:24:56.742229   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:24:56.744599   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:24:57.054229   27491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:24:57.095064   27491 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:24:57.095365   27491 kapi.go:59] client config for ha-834040: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.crt", KeyFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key", CAFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55640), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0311 20:24:57.095447   27491 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.128:8443
	I0311 20:24:57.095729   27491 node_ready.go:35] waiting up to 6m0s for node "ha-834040-m02" to be "Ready" ...
	I0311 20:24:57.095856   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:24:57.095869   27491 round_trippers.go:469] Request Headers:
	I0311 20:24:57.095880   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:24:57.095886   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:24:57.108596   27491 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
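The GET /api/v1/nodes/ha-834040-m02 round-trips that follow are the readiness poll: the node is re-queried roughly every half second, for up to the 6 minutes announced above, until its Ready condition turns true. A hedged client-go sketch of the same loop (kubeconfig path, node name and timeout taken from the log; the structure is ours, not minikube's node_ready.go, and wait.PollUntilContextTimeout is from recent apimachinery releases):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18358-11004/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := client.CoreV1().Nodes().Get(ctx, "ha-834040-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println(`node "ha-834040-m02" is Ready`)
    }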
	I0311 20:24:57.596732   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:24:57.596768   27491 round_trippers.go:469] Request Headers:
	I0311 20:24:57.596779   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:24:57.596784   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:24:57.603205   27491 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0311 20:24:58.096019   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:24:58.096043   27491 round_trippers.go:469] Request Headers:
	I0311 20:24:58.096055   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:24:58.096061   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:24:58.100634   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:24:58.596289   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:24:58.596314   27491 round_trippers.go:469] Request Headers:
	I0311 20:24:58.596324   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:24:58.596329   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:24:58.599790   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:24:59.095922   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:24:59.095943   27491 round_trippers.go:469] Request Headers:
	I0311 20:24:59.095950   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:24:59.095956   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:24:59.099193   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:24:59.099924   27491 node_ready.go:53] node "ha-834040-m02" has status "Ready":"False"
	I0311 20:24:59.596329   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:24:59.596354   27491 round_trippers.go:469] Request Headers:
	I0311 20:24:59.596367   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:24:59.596372   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:24:59.600230   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:00.096684   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:00.096706   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:00.096714   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:00.096717   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:00.100756   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:00.596482   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:00.596512   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:00.596520   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:00.596532   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:00.600066   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:01.096615   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:01.096639   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:01.096651   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:01.096656   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:01.101070   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:01.101733   27491 node_ready.go:53] node "ha-834040-m02" has status "Ready":"False"
	I0311 20:25:01.596010   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:01.596030   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:01.596038   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:01.596042   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:01.599178   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:02.096182   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:02.096202   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.096210   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.096214   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.100382   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:02.101034   27491 node_ready.go:49] node "ha-834040-m02" has status "Ready":"True"
	I0311 20:25:02.101054   27491 node_ready.go:38] duration metric: took 5.005300284s for node "ha-834040-m02" to be "Ready" ...
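The block above is the node-readiness poll: the test binary GETs /api/v1/nodes/ha-834040-m02 roughly every 500ms (see the timestamps) until the node reports "Ready":"True". The following is a minimal client-go sketch of that kind of wait — illustrative only, not minikube's own code — assuming a kubeconfig at the default location and hard-coding the node name:

// node_ready_sketch.go — illustrative sketch, not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll GET /api/v1/nodes/<name> every 500ms for up to 6 minutes, mirroring the requests above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-834040-m02", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-834040-m02 is Ready")
}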
	I0311 20:25:02.101065   27491 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 20:25:02.101155   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:25:02.101166   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.101176   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.101181   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.105954   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:02.113925   27491 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d6f2x" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.114005   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d6f2x
	I0311 20:25:02.114016   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.114026   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.114032   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.117033   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.117526   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:02.117539   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.117549   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.117556   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.120449   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.121030   27491 pod_ready.go:92] pod "coredns-5dd5756b68-d6f2x" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:02.121047   27491 pod_ready.go:81] duration metric: took 7.103461ms for pod "coredns-5dd5756b68-d6f2x" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.121055   27491 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kq47h" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.121100   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kq47h
	I0311 20:25:02.121108   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.121114   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.121120   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.123680   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.124425   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:02.124437   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.124444   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.124447   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.127504   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:02.128051   27491 pod_ready.go:92] pod "coredns-5dd5756b68-kq47h" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:02.128066   27491 pod_ready.go:81] duration metric: took 7.00259ms for pod "coredns-5dd5756b68-kq47h" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.128074   27491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.128159   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-834040
	I0311 20:25:02.128168   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.128174   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.128180   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.130803   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.132071   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:02.132084   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.132093   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.132098   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.134883   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.135662   27491 pod_ready.go:92] pod "etcd-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:02.135676   27491 pod_ready.go:81] duration metric: took 7.594242ms for pod "etcd-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.135683   27491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.135726   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-834040-m02
	I0311 20:25:02.135737   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.135746   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.135756   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.138263   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.138832   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:02.138849   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.138859   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.138864   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.141780   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.142811   27491 pod_ready.go:92] pod "etcd-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:02.142827   27491 pod_ready.go:81] duration metric: took 7.138293ms for pod "etcd-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.142838   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.297182   27491 request.go:629] Waited for 154.299948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040
	I0311 20:25:02.297239   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040
	I0311 20:25:02.297245   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.297255   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.297262   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.301120   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:02.497222   27491 request.go:629] Waited for 195.354737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:02.497268   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:02.497273   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.497280   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.497285   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.500580   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:02.501086   27491 pod_ready.go:92] pod "kube-apiserver-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:02.501104   27491 pod_ready.go:81] duration metric: took 358.258018ms for pod "kube-apiserver-ha-834040" in "kube-system" namespace to be "Ready" ...
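The "Waited ... due to client-side throttling, not priority and fairness" messages interleaved above come from client-go's token-bucket rate limiter, not from the apiserver: the rest.Config dump earlier in this log shows QPS:0, Burst:0, so the client falls back to rest.DefaultQPS (5 requests/s) and rest.DefaultBurst (10), and bursts of back-to-back GETs get delayed locally. The server-side mechanism the message distinguishes itself from is API Priority and Fairness. A minimal sketch of where those client-side knobs live (illustrative, not minikube's code):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Left at zero, client-go uses rest.DefaultQPS (5) and rest.DefaultBurst (10);
	// raising them removes the "client-side throttling" waits seen in this log.
	cfg.QPS = 50
	cfg.Burst = 100

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}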
	I0311 20:25:02.501127   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.697194   27491 request.go:629] Waited for 195.986873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:02.697261   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:02.697270   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.697277   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.697281   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.700555   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:02.896728   27491 request.go:629] Waited for 195.42862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:02.896824   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:02.896833   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.896843   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.896852   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.900841   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:03.096916   27491 request.go:629] Waited for 95.254865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:03.096997   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:03.097008   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:03.097019   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:03.097027   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:03.100889   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:03.297149   27491 request.go:629] Waited for 195.361273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:03.297198   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:03.297203   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:03.297213   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:03.297219   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:03.301383   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:03.502146   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:03.502166   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:03.502174   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:03.502178   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:03.507151   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:03.696144   27491 request.go:629] Waited for 188.292165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:03.696222   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:03.696227   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:03.696235   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:03.696238   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:03.699439   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:04.002211   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:04.002232   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:04.002240   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:04.002244   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:04.007636   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:25:04.096473   27491 request.go:629] Waited for 88.234909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:04.096516   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:04.096521   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:04.096529   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:04.096540   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:04.099659   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:04.501599   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:04.501625   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:04.501646   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:04.501652   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:04.505133   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:04.505970   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:04.505985   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:04.505993   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:04.505999   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:04.508683   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:04.509590   27491 pod_ready.go:102] pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace has status "Ready":"False"
	I0311 20:25:05.001429   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:05.001456   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:05.001468   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:05.001476   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:05.005521   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:05.006316   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:05.006330   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:05.006342   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:05.006346   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:05.009333   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:05.502069   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:05.502086   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:05.502094   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:05.502097   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:05.505566   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:05.506322   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:05.506337   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:05.506344   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:05.506347   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:05.509177   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:06.001481   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:06.001501   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:06.001508   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:06.001512   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:06.005575   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:06.006743   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:06.006755   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:06.006762   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:06.006767   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:06.010301   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:06.501382   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:06.501404   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:06.501411   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:06.501416   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:06.506322   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:06.507053   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:06.507069   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:06.507076   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:06.507080   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:06.510691   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:06.511103   27491 pod_ready.go:102] pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace has status "Ready":"False"
	I0311 20:25:07.001462   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:07.001481   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.001489   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.001492   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.007649   27491 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0311 20:25:07.008261   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:07.008276   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.008284   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.008287   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.015581   27491 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 20:25:07.016178   27491 pod_ready.go:92] pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:07.016195   27491 pod_ready.go:81] duration metric: took 4.515042161s for pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.016204   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.016257   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040
	I0311 20:25:07.016265   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.016272   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.016277   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.020024   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:07.020704   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:07.020716   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.020723   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.020727   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.024370   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:07.024860   27491 pod_ready.go:92] pod "kube-controller-manager-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:07.024875   27491 pod_ready.go:81] duration metric: took 8.665178ms for pod "kube-controller-manager-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.024883   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.097186   27491 request.go:629] Waited for 72.263494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040-m02
	I0311 20:25:07.097259   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040-m02
	I0311 20:25:07.097268   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.097279   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.097292   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.102859   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:25:07.296221   27491 request.go:629] Waited for 192.271341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:07.296274   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:07.296288   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.296313   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.296320   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.300428   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:07.301029   27491 pod_ready.go:92] pod "kube-controller-manager-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:07.301047   27491 pod_ready.go:81] duration metric: took 276.158386ms for pod "kube-controller-manager-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.301056   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dsjx4" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.496442   27491 request.go:629] Waited for 195.329737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsjx4
	I0311 20:25:07.496500   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsjx4
	I0311 20:25:07.496505   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.496513   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.496518   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.500079   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:07.697138   27491 request.go:629] Waited for 196.195898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:07.697214   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:07.697227   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.697237   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.697246   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.702892   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:25:07.704091   27491 pod_ready.go:92] pod "kube-proxy-dsjx4" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:07.704113   27491 pod_ready.go:81] duration metric: took 403.050172ms for pod "kube-proxy-dsjx4" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.704127   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h8svv" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.897040   27491 request.go:629] Waited for 192.804717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h8svv
	I0311 20:25:07.897099   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h8svv
	I0311 20:25:07.897107   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.897121   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.897131   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.901240   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:08.096501   27491 request.go:629] Waited for 194.354639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:08.096563   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:08.096571   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:08.096578   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:08.096590   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:08.100062   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:08.100938   27491 pod_ready.go:92] pod "kube-proxy-h8svv" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:08.100959   27491 pod_ready.go:81] duration metric: took 396.822704ms for pod "kube-proxy-h8svv" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:08.100976   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:08.296993   27491 request.go:629] Waited for 195.933071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040
	I0311 20:25:08.297047   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040
	I0311 20:25:08.297052   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:08.297058   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:08.297063   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:08.300456   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:08.496512   27491 request.go:629] Waited for 195.342547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:08.496582   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:08.496593   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:08.496603   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:08.496610   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:08.499946   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:08.500743   27491 pod_ready.go:92] pod "kube-scheduler-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:08.500757   27491 pod_ready.go:81] duration metric: took 399.770972ms for pod "kube-scheduler-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:08.500766   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:08.696950   27491 request.go:629] Waited for 196.133275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040-m02
	I0311 20:25:08.697046   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040-m02
	I0311 20:25:08.697055   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:08.697062   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:08.697067   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:08.701016   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:08.897099   27491 request.go:629] Waited for 195.338584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:08.897157   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:08.897164   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:08.897176   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:08.897186   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:08.901688   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:08.902660   27491 pod_ready.go:92] pod "kube-scheduler-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:08.902678   27491 pod_ready.go:81] duration metric: took 401.905871ms for pod "kube-scheduler-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:08.902691   27491 pod_ready.go:38] duration metric: took 6.801589621s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 20:25:08.902712   27491 api_server.go:52] waiting for apiserver process to appear ...
	I0311 20:25:08.902774   27491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:25:08.921063   27491 api_server.go:72] duration metric: took 12.17911372s to wait for apiserver process to appear ...
	I0311 20:25:08.921085   27491 api_server.go:88] waiting for apiserver healthz status ...
	I0311 20:25:08.921103   27491 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0311 20:25:08.925702   27491 api_server.go:279] https://192.168.39.128:8443/healthz returned 200:
	ok
	I0311 20:25:08.925785   27491 round_trippers.go:463] GET https://192.168.39.128:8443/version
	I0311 20:25:08.925797   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:08.925806   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:08.925816   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:08.926901   27491 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0311 20:25:08.927075   27491 api_server.go:141] control plane version: v1.28.4
	I0311 20:25:08.927093   27491 api_server.go:131] duration metric: took 6.003215ms to wait for apiserver health ...
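The two probes just logged — GET /healthz, expecting the literal body "ok", and GET /version, which reports the control-plane version (v1.28.4 in this run) — can be reproduced with client-go's discovery client. The sketch below is illustrative only, not minikube's code, and assumes a kubeconfig at the default path:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz returns plain text "ok" when the apiserver is healthy.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version reports the control-plane version.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}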
	I0311 20:25:08.927100   27491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 20:25:09.096448   27491 request.go:629] Waited for 169.296838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:25:09.096511   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:25:09.096516   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:09.096523   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:09.096526   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:09.101542   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:09.106192   27491 system_pods.go:59] 17 kube-system pods found
	I0311 20:25:09.106227   27491 system_pods.go:61] "coredns-5dd5756b68-d6f2x" [ddc7bef4-f6c5-442f-8149-e52a1822986d] Running
	I0311 20:25:09.106233   27491 system_pods.go:61] "coredns-5dd5756b68-kq47h" [f2a70553-206f-4d11-b32f-01ddd30db8ec] Running
	I0311 20:25:09.106236   27491 system_pods.go:61] "etcd-ha-834040" [76aef9d7-e8f7-4675-92db-614a3723f8b0] Running
	I0311 20:25:09.106239   27491 system_pods.go:61] "etcd-ha-834040-m02" [c87b59c2-5dcd-4217-9d64-1eab2ecf0075] Running
	I0311 20:25:09.106243   27491 system_pods.go:61] "kindnet-bw656" [edb13135-e5b5-46df-922e-5ebfb444c219] Running
	I0311 20:25:09.106247   27491 system_pods.go:61] "kindnet-rqcq6" [7c368ac4-0fa3-4185-98a7-40df481939ee] Running
	I0311 20:25:09.106259   27491 system_pods.go:61] "kube-apiserver-ha-834040" [f1a21652-f5f0-4ff4-a181-9719fbb72320] Running
	I0311 20:25:09.106264   27491 system_pods.go:61] "kube-apiserver-ha-834040-m02" [eaadd58d-4c00-4dd8-94fe-2d28bed895f5] Running
	I0311 20:25:09.106269   27491 system_pods.go:61] "kube-controller-manager-ha-834040" [48fff24f-f490-4cad-ae02-67dd35208820] Running
	I0311 20:25:09.106274   27491 system_pods.go:61] "kube-controller-manager-ha-834040-m02" [a3418676-a178-4f18-accd-cbc835234b6f] Running
	I0311 20:25:09.106279   27491 system_pods.go:61] "kube-proxy-dsjx4" [b8dccd4a-d900-4c56-8861-4c19dbda4a31] Running
	I0311 20:25:09.106286   27491 system_pods.go:61] "kube-proxy-h8svv" [3a7973ca-9a35-4190-8845-cc685619b093] Running
	I0311 20:25:09.106291   27491 system_pods.go:61] "kube-scheduler-ha-834040" [665bbcfc-d34c-46f7-8c3c-73380466fb35] Running
	I0311 20:25:09.106296   27491 system_pods.go:61] "kube-scheduler-ha-834040-m02" [3429847c-a119-4dba-bcfc-f41e6bd8b351] Running
	I0311 20:25:09.106300   27491 system_pods.go:61] "kube-vip-ha-834040" [d539e386-31f6-4b7c-9e36-8a413b82a4a8] Running
	I0311 20:25:09.106304   27491 system_pods.go:61] "kube-vip-ha-834040-m02" [59d64aa5-94ab-44d5-a42e-5453eb2c0b37] Running
	I0311 20:25:09.106307   27491 system_pods.go:61] "storage-provisioner" [bbc64228-86a0-4e0c-9eef-f4644439ca13] Running
	I0311 20:25:09.106312   27491 system_pods.go:74] duration metric: took 179.207071ms to wait for pod list to return data ...
	I0311 20:25:09.106320   27491 default_sa.go:34] waiting for default service account to be created ...
	I0311 20:25:09.296703   27491 request.go:629] Waited for 190.328936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0311 20:25:09.296767   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0311 20:25:09.296773   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:09.296780   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:09.296784   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:09.300442   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:09.300682   27491 default_sa.go:45] found service account: "default"
	I0311 20:25:09.300698   27491 default_sa.go:55] duration metric: took 194.373229ms for default service account to be created ...
	I0311 20:25:09.300706   27491 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 20:25:09.496803   27491 request.go:629] Waited for 196.035335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:25:09.496882   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:25:09.496889   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:09.496897   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:09.496906   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:09.502227   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:25:09.507523   27491 system_pods.go:86] 17 kube-system pods found
	I0311 20:25:09.507544   27491 system_pods.go:89] "coredns-5dd5756b68-d6f2x" [ddc7bef4-f6c5-442f-8149-e52a1822986d] Running
	I0311 20:25:09.507550   27491 system_pods.go:89] "coredns-5dd5756b68-kq47h" [f2a70553-206f-4d11-b32f-01ddd30db8ec] Running
	I0311 20:25:09.507556   27491 system_pods.go:89] "etcd-ha-834040" [76aef9d7-e8f7-4675-92db-614a3723f8b0] Running
	I0311 20:25:09.507566   27491 system_pods.go:89] "etcd-ha-834040-m02" [c87b59c2-5dcd-4217-9d64-1eab2ecf0075] Running
	I0311 20:25:09.507576   27491 system_pods.go:89] "kindnet-bw656" [edb13135-e5b5-46df-922e-5ebfb444c219] Running
	I0311 20:25:09.507584   27491 system_pods.go:89] "kindnet-rqcq6" [7c368ac4-0fa3-4185-98a7-40df481939ee] Running
	I0311 20:25:09.507594   27491 system_pods.go:89] "kube-apiserver-ha-834040" [f1a21652-f5f0-4ff4-a181-9719fbb72320] Running
	I0311 20:25:09.507603   27491 system_pods.go:89] "kube-apiserver-ha-834040-m02" [eaadd58d-4c00-4dd8-94fe-2d28bed895f5] Running
	I0311 20:25:09.507609   27491 system_pods.go:89] "kube-controller-manager-ha-834040" [48fff24f-f490-4cad-ae02-67dd35208820] Running
	I0311 20:25:09.507618   27491 system_pods.go:89] "kube-controller-manager-ha-834040-m02" [a3418676-a178-4f18-accd-cbc835234b6f] Running
	I0311 20:25:09.507625   27491 system_pods.go:89] "kube-proxy-dsjx4" [b8dccd4a-d900-4c56-8861-4c19dbda4a31] Running
	I0311 20:25:09.507635   27491 system_pods.go:89] "kube-proxy-h8svv" [3a7973ca-9a35-4190-8845-cc685619b093] Running
	I0311 20:25:09.507643   27491 system_pods.go:89] "kube-scheduler-ha-834040" [665bbcfc-d34c-46f7-8c3c-73380466fb35] Running
	I0311 20:25:09.507652   27491 system_pods.go:89] "kube-scheduler-ha-834040-m02" [3429847c-a119-4dba-bcfc-f41e6bd8b351] Running
	I0311 20:25:09.507661   27491 system_pods.go:89] "kube-vip-ha-834040" [d539e386-31f6-4b7c-9e36-8a413b82a4a8] Running
	I0311 20:25:09.507667   27491 system_pods.go:89] "kube-vip-ha-834040-m02" [59d64aa5-94ab-44d5-a42e-5453eb2c0b37] Running
	I0311 20:25:09.507675   27491 system_pods.go:89] "storage-provisioner" [bbc64228-86a0-4e0c-9eef-f4644439ca13] Running
	I0311 20:25:09.507688   27491 system_pods.go:126] duration metric: took 206.972856ms to wait for k8s-apps to be running ...
	I0311 20:25:09.507701   27491 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 20:25:09.507747   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:25:09.523611   27491 system_svc.go:56] duration metric: took 15.904633ms WaitForService to wait for kubelet
	I0311 20:25:09.523637   27491 kubeadm.go:576] duration metric: took 12.781689138s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 20:25:09.523657   27491 node_conditions.go:102] verifying NodePressure condition ...
	I0311 20:25:09.697062   27491 request.go:629] Waited for 173.340368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes
	I0311 20:25:09.697126   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes
	I0311 20:25:09.697131   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:09.697139   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:09.697147   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:09.700420   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:09.702981   27491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 20:25:09.703003   27491 node_conditions.go:123] node cpu capacity is 2
	I0311 20:25:09.703012   27491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 20:25:09.703016   27491 node_conditions.go:123] node cpu capacity is 2
	I0311 20:25:09.703020   27491 node_conditions.go:105] duration metric: took 179.357298ms to run NodePressure ...
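The NodePressure step lists all nodes once and reads their capacity (here, 2 CPUs and 17734596Ki of ephemeral storage on each of the two nodes). A minimal client-go sketch of the same read, again illustrative rather than minikube's implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/nodes and print each node's cpu and ephemeral-storage capacity.
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}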
	I0311 20:25:09.703029   27491 start.go:240] waiting for startup goroutines ...
	I0311 20:25:09.703052   27491 start.go:254] writing updated cluster config ...
	I0311 20:25:09.705371   27491 out.go:177] 
	I0311 20:25:09.706745   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:25:09.706832   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:25:09.708578   27491 out.go:177] * Starting "ha-834040-m03" control-plane node in "ha-834040" cluster
	I0311 20:25:09.710147   27491 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:25:09.710167   27491 cache.go:56] Caching tarball of preloaded images
	I0311 20:25:09.710272   27491 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 20:25:09.710295   27491 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 20:25:09.710404   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:25:09.710663   27491 start.go:360] acquireMachinesLock for ha-834040-m03: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 20:25:09.710721   27491 start.go:364] duration metric: took 29.271µs to acquireMachinesLock for "ha-834040-m03"
	I0311 20:25:09.710746   27491 start.go:93] Provisioning new machine with config: &{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:25:09.710873   27491 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0311 20:25:09.712644   27491 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 20:25:09.712725   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:25:09.712785   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:25:09.729319   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46573
	I0311 20:25:09.729650   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:25:09.730134   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:25:09.730163   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:25:09.730525   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:25:09.730708   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetMachineName
	I0311 20:25:09.730891   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:09.731027   27491 start.go:159] libmachine.API.Create for "ha-834040" (driver="kvm2")
	I0311 20:25:09.731063   27491 client.go:168] LocalClient.Create starting
	I0311 20:25:09.731090   27491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem
	I0311 20:25:09.731119   27491 main.go:141] libmachine: Decoding PEM data...
	I0311 20:25:09.731134   27491 main.go:141] libmachine: Parsing certificate...
	I0311 20:25:09.731182   27491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem
	I0311 20:25:09.731200   27491 main.go:141] libmachine: Decoding PEM data...
	I0311 20:25:09.731212   27491 main.go:141] libmachine: Parsing certificate...
	I0311 20:25:09.731228   27491 main.go:141] libmachine: Running pre-create checks...
	I0311 20:25:09.731236   27491 main.go:141] libmachine: (ha-834040-m03) Calling .PreCreateCheck
	I0311 20:25:09.731356   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetConfigRaw
	I0311 20:25:09.731729   27491 main.go:141] libmachine: Creating machine...
	I0311 20:25:09.731742   27491 main.go:141] libmachine: (ha-834040-m03) Calling .Create
	I0311 20:25:09.731850   27491 main.go:141] libmachine: (ha-834040-m03) Creating KVM machine...
	I0311 20:25:09.733124   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found existing default KVM network
	I0311 20:25:09.733298   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found existing private KVM network mk-ha-834040
	I0311 20:25:09.733443   27491 main.go:141] libmachine: (ha-834040-m03) Setting up store path in /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03 ...
	I0311 20:25:09.733468   27491 main.go:141] libmachine: (ha-834040-m03) Building disk image from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0311 20:25:09.733518   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:09.733421   28175 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:25:09.733577   27491 main.go:141] libmachine: (ha-834040-m03) Downloading /home/jenkins/minikube-integration/18358-11004/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0311 20:25:09.954288   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:09.954184   28175 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa...
	I0311 20:25:10.124677   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:10.124565   28175 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/ha-834040-m03.rawdisk...
	I0311 20:25:10.124705   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Writing magic tar header
	I0311 20:25:10.124715   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Writing SSH key tar header
	I0311 20:25:10.124726   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:10.124666   28175 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03 ...
	I0311 20:25:10.124821   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03
	I0311 20:25:10.124844   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines
	I0311 20:25:10.124857   27491 main.go:141] libmachine: (ha-834040-m03) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03 (perms=drwx------)
	I0311 20:25:10.124871   27491 main.go:141] libmachine: (ha-834040-m03) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines (perms=drwxr-xr-x)
	I0311 20:25:10.124883   27491 main.go:141] libmachine: (ha-834040-m03) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube (perms=drwxr-xr-x)
	I0311 20:25:10.124898   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:25:10.124918   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004
	I0311 20:25:10.124932   27491 main.go:141] libmachine: (ha-834040-m03) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004 (perms=drwxrwxr-x)
	I0311 20:25:10.124947   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0311 20:25:10.124960   27491 main.go:141] libmachine: (ha-834040-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0311 20:25:10.124970   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home/jenkins
	I0311 20:25:10.124986   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home
	I0311 20:25:10.124999   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Skipping /home - not owner
	I0311 20:25:10.125012   27491 main.go:141] libmachine: (ha-834040-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0311 20:25:10.125027   27491 main.go:141] libmachine: (ha-834040-m03) Creating domain...
	I0311 20:25:10.125863   27491 main.go:141] libmachine: (ha-834040-m03) define libvirt domain using xml: 
	I0311 20:25:10.125890   27491 main.go:141] libmachine: (ha-834040-m03) <domain type='kvm'>
	I0311 20:25:10.125902   27491 main.go:141] libmachine: (ha-834040-m03)   <name>ha-834040-m03</name>
	I0311 20:25:10.125915   27491 main.go:141] libmachine: (ha-834040-m03)   <memory unit='MiB'>2200</memory>
	I0311 20:25:10.125928   27491 main.go:141] libmachine: (ha-834040-m03)   <vcpu>2</vcpu>
	I0311 20:25:10.125935   27491 main.go:141] libmachine: (ha-834040-m03)   <features>
	I0311 20:25:10.125945   27491 main.go:141] libmachine: (ha-834040-m03)     <acpi/>
	I0311 20:25:10.125956   27491 main.go:141] libmachine: (ha-834040-m03)     <apic/>
	I0311 20:25:10.125967   27491 main.go:141] libmachine: (ha-834040-m03)     <pae/>
	I0311 20:25:10.125978   27491 main.go:141] libmachine: (ha-834040-m03)     
	I0311 20:25:10.125989   27491 main.go:141] libmachine: (ha-834040-m03)   </features>
	I0311 20:25:10.126003   27491 main.go:141] libmachine: (ha-834040-m03)   <cpu mode='host-passthrough'>
	I0311 20:25:10.126012   27491 main.go:141] libmachine: (ha-834040-m03)   
	I0311 20:25:10.126020   27491 main.go:141] libmachine: (ha-834040-m03)   </cpu>
	I0311 20:25:10.126033   27491 main.go:141] libmachine: (ha-834040-m03)   <os>
	I0311 20:25:10.126045   27491 main.go:141] libmachine: (ha-834040-m03)     <type>hvm</type>
	I0311 20:25:10.126059   27491 main.go:141] libmachine: (ha-834040-m03)     <boot dev='cdrom'/>
	I0311 20:25:10.126070   27491 main.go:141] libmachine: (ha-834040-m03)     <boot dev='hd'/>
	I0311 20:25:10.126094   27491 main.go:141] libmachine: (ha-834040-m03)     <bootmenu enable='no'/>
	I0311 20:25:10.126118   27491 main.go:141] libmachine: (ha-834040-m03)   </os>
	I0311 20:25:10.126132   27491 main.go:141] libmachine: (ha-834040-m03)   <devices>
	I0311 20:25:10.126148   27491 main.go:141] libmachine: (ha-834040-m03)     <disk type='file' device='cdrom'>
	I0311 20:25:10.126166   27491 main.go:141] libmachine: (ha-834040-m03)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/boot2docker.iso'/>
	I0311 20:25:10.126175   27491 main.go:141] libmachine: (ha-834040-m03)       <target dev='hdc' bus='scsi'/>
	I0311 20:25:10.126186   27491 main.go:141] libmachine: (ha-834040-m03)       <readonly/>
	I0311 20:25:10.126195   27491 main.go:141] libmachine: (ha-834040-m03)     </disk>
	I0311 20:25:10.126205   27491 main.go:141] libmachine: (ha-834040-m03)     <disk type='file' device='disk'>
	I0311 20:25:10.126218   27491 main.go:141] libmachine: (ha-834040-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0311 20:25:10.126238   27491 main.go:141] libmachine: (ha-834040-m03)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/ha-834040-m03.rawdisk'/>
	I0311 20:25:10.126253   27491 main.go:141] libmachine: (ha-834040-m03)       <target dev='hda' bus='virtio'/>
	I0311 20:25:10.126265   27491 main.go:141] libmachine: (ha-834040-m03)     </disk>
	I0311 20:25:10.126276   27491 main.go:141] libmachine: (ha-834040-m03)     <interface type='network'>
	I0311 20:25:10.126288   27491 main.go:141] libmachine: (ha-834040-m03)       <source network='mk-ha-834040'/>
	I0311 20:25:10.126299   27491 main.go:141] libmachine: (ha-834040-m03)       <model type='virtio'/>
	I0311 20:25:10.126317   27491 main.go:141] libmachine: (ha-834040-m03)     </interface>
	I0311 20:25:10.126330   27491 main.go:141] libmachine: (ha-834040-m03)     <interface type='network'>
	I0311 20:25:10.126341   27491 main.go:141] libmachine: (ha-834040-m03)       <source network='default'/>
	I0311 20:25:10.126350   27491 main.go:141] libmachine: (ha-834040-m03)       <model type='virtio'/>
	I0311 20:25:10.126360   27491 main.go:141] libmachine: (ha-834040-m03)     </interface>
	I0311 20:25:10.126369   27491 main.go:141] libmachine: (ha-834040-m03)     <serial type='pty'>
	I0311 20:25:10.126379   27491 main.go:141] libmachine: (ha-834040-m03)       <target port='0'/>
	I0311 20:25:10.126387   27491 main.go:141] libmachine: (ha-834040-m03)     </serial>
	I0311 20:25:10.126398   27491 main.go:141] libmachine: (ha-834040-m03)     <console type='pty'>
	I0311 20:25:10.126411   27491 main.go:141] libmachine: (ha-834040-m03)       <target type='serial' port='0'/>
	I0311 20:25:10.126425   27491 main.go:141] libmachine: (ha-834040-m03)     </console>
	I0311 20:25:10.126438   27491 main.go:141] libmachine: (ha-834040-m03)     <rng model='virtio'>
	I0311 20:25:10.126449   27491 main.go:141] libmachine: (ha-834040-m03)       <backend model='random'>/dev/random</backend>
	I0311 20:25:10.126461   27491 main.go:141] libmachine: (ha-834040-m03)     </rng>
	I0311 20:25:10.126468   27491 main.go:141] libmachine: (ha-834040-m03)     
	I0311 20:25:10.126479   27491 main.go:141] libmachine: (ha-834040-m03)     
	I0311 20:25:10.126494   27491 main.go:141] libmachine: (ha-834040-m03)   </devices>
	I0311 20:25:10.126504   27491 main.go:141] libmachine: (ha-834040-m03) </domain>
	I0311 20:25:10.126514   27491 main.go:141] libmachine: (ha-834040-m03) 
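	The XML dump above is the full libvirt domain definition the kvm2 driver builds for ha-834040-m03 (boot2docker ISO as CD-ROM, raw disk, two virtio NICs on mk-ha-834040 and default, serial console, virtio RNG). As a minimal sketch of the define-and-boot step that follows, assuming the libvirt.org/go/libvirt Go bindings and a hypothetical ha-834040-m03.xml file holding that XML (this is not minikube's actual code):

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// Hypothetical file containing the <domain> XML logged above.
		xml, err := os.ReadFile("ha-834040-m03.xml")
		if err != nil {
			log.Fatal(err)
		}

		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Persistently define the domain ("define libvirt domain using xml").
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		// Boot the defined domain ("Creating domain...").
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
		log.Println("domain defined and started; waiting for a DHCP lease comes next")
	}

	DomainDefineXML creates a persistent definition, which is why the driver can fetch the domain XML again ("Getting domain xml...") before starting it.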
	I0311 20:25:10.133010   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:23:1f:55 in network default
	I0311 20:25:10.133685   27491 main.go:141] libmachine: (ha-834040-m03) Ensuring networks are active...
	I0311 20:25:10.133713   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:10.134445   27491 main.go:141] libmachine: (ha-834040-m03) Ensuring network default is active
	I0311 20:25:10.134720   27491 main.go:141] libmachine: (ha-834040-m03) Ensuring network mk-ha-834040 is active
	I0311 20:25:10.135111   27491 main.go:141] libmachine: (ha-834040-m03) Getting domain xml...
	I0311 20:25:10.135810   27491 main.go:141] libmachine: (ha-834040-m03) Creating domain...
	I0311 20:25:11.341583   27491 main.go:141] libmachine: (ha-834040-m03) Waiting to get IP...
	I0311 20:25:11.343518   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:11.344048   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:11.344079   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:11.344006   28175 retry.go:31] will retry after 213.574303ms: waiting for machine to come up
	I0311 20:25:11.559415   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:11.559785   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:11.559812   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:11.559746   28175 retry.go:31] will retry after 252.339913ms: waiting for machine to come up
	I0311 20:25:11.814155   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:11.814612   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:11.814639   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:11.814562   28175 retry.go:31] will retry after 325.721227ms: waiting for machine to come up
	I0311 20:25:12.142249   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:12.142702   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:12.142731   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:12.142656   28175 retry.go:31] will retry after 552.651246ms: waiting for machine to come up
	I0311 20:25:12.697337   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:12.697772   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:12.697806   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:12.697727   28175 retry.go:31] will retry after 695.62001ms: waiting for machine to come up
	I0311 20:25:13.394518   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:13.394985   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:13.395014   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:13.394946   28175 retry.go:31] will retry after 742.694244ms: waiting for machine to come up
	I0311 20:25:14.139131   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:14.139524   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:14.139550   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:14.139483   28175 retry.go:31] will retry after 834.612641ms: waiting for machine to come up
	I0311 20:25:14.975514   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:14.976013   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:14.976039   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:14.975960   28175 retry.go:31] will retry after 1.136028207s: waiting for machine to come up
	I0311 20:25:16.113828   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:16.114350   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:16.114381   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:16.114284   28175 retry.go:31] will retry after 1.503117438s: waiting for machine to come up
	I0311 20:25:17.618499   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:17.618941   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:17.618964   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:17.618902   28175 retry.go:31] will retry after 1.502353682s: waiting for machine to come up
	I0311 20:25:19.122494   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:19.122914   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:19.122945   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:19.122867   28175 retry.go:31] will retry after 2.128080831s: waiting for machine to come up
	I0311 20:25:21.253320   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:21.253755   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:21.253777   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:21.253713   28175 retry.go:31] will retry after 3.478671111s: waiting for machine to come up
	I0311 20:25:24.733738   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:24.734197   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:24.734222   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:24.734159   28175 retry.go:31] will retry after 3.215581774s: waiting for machine to come up
	I0311 20:25:27.951029   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:27.951466   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:27.951493   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:27.951432   28175 retry.go:31] will retry after 3.808616946s: waiting for machine to come up
	I0311 20:25:31.762631   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:31.763124   27491 main.go:141] libmachine: (ha-834040-m03) Found IP for machine: 192.168.39.40
	I0311 20:25:31.763158   27491 main.go:141] libmachine: (ha-834040-m03) Reserving static IP address...
	I0311 20:25:31.763171   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has current primary IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:31.763584   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find host DHCP lease matching {name: "ha-834040-m03", mac: "52:54:00:93:84:f9", ip: "192.168.39.40"} in network mk-ha-834040
	I0311 20:25:31.833638   27491 main.go:141] libmachine: (ha-834040-m03) Reserved static IP address: 192.168.39.40
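	The block above shows the driver polling for the new domain's DHCP lease with steadily growing delays (213ms, 252ms, 325ms, ... up to a few seconds) until an IP address appears in network mk-ha-834040. A small stand-alone sketch of that wait loop; the growth factor, jitter and timeout here are assumptions rather than minikube's exact retry parameters:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP with a growing, jittered delay until it returns an
	// address or the deadline passes, mirroring the retry pattern in the log.
	func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 200 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			// Grow the delay with a little jitter between polls.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		// Stand-in lookup that "finds" a lease after a few seconds.
		readyAt := time.Now().Add(3 * time.Second)
		ip, err := waitForIP(func() (string, error) {
			if time.Now().After(readyAt) {
				return "192.168.39.40", nil
			}
			return "", errors.New("no lease yet")
		}, 30*time.Second)
		fmt.Println(ip, err)
	}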
	I0311 20:25:31.833672   27491 main.go:141] libmachine: (ha-834040-m03) Waiting for SSH to be available...
	I0311 20:25:31.833682   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Getting to WaitForSSH function...
	I0311 20:25:31.836221   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:31.836645   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:minikube Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:31.836677   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:31.836871   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Using SSH client type: external
	I0311 20:25:31.836894   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa (-rw-------)
	I0311 20:25:31.836926   27491 main.go:141] libmachine: (ha-834040-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 20:25:31.836939   27491 main.go:141] libmachine: (ha-834040-m03) DBG | About to run SSH command:
	I0311 20:25:31.836956   27491 main.go:141] libmachine: (ha-834040-m03) DBG | exit 0
	I0311 20:25:31.972823   27491 main.go:141] libmachine: (ha-834040-m03) DBG | SSH cmd err, output: <nil>: 
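	Once the lease is visible, the driver probes SSH reachability by running the external ssh binary with the options listed above until a remote `exit 0` succeeds. A rough sketch of that probe using os/exec, reusing the key path and address from the log (the retry interval and overall deadline are assumptions):

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa",
			"-p", "22",
			"docker@192.168.39.40",
			"exit 0",
		}
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			// Success of the remote "exit 0" means sshd is up and the key is accepted.
			if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
				log.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second) // guest may still be booting; retry shortly
		}
		log.Fatal("timed out waiting for SSH")
	}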
	I0311 20:25:31.973077   27491 main.go:141] libmachine: (ha-834040-m03) KVM machine creation complete!
	I0311 20:25:31.973365   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetConfigRaw
	I0311 20:25:31.973930   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:31.974126   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:31.974301   27491 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0311 20:25:31.974318   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetState
	I0311 20:25:31.975522   27491 main.go:141] libmachine: Detecting operating system of created instance...
	I0311 20:25:31.975537   27491 main.go:141] libmachine: Waiting for SSH to be available...
	I0311 20:25:31.975543   27491 main.go:141] libmachine: Getting to WaitForSSH function...
	I0311 20:25:31.975551   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:31.977802   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:31.978213   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:31.978247   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:31.978340   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:31.978518   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:31.978692   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:31.978814   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:31.978986   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:25:31.979209   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0311 20:25:31.979221   27491 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0311 20:25:32.100330   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:25:32.100357   27491 main.go:141] libmachine: Detecting the provisioner...
	I0311 20:25:32.100369   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:32.103119   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.103466   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.103502   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.103649   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:32.103850   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.104024   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.104186   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:32.104345   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:25:32.104545   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0311 20:25:32.104559   27491 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0311 20:25:32.222259   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0311 20:25:32.222332   27491 main.go:141] libmachine: found compatible host: buildroot
	I0311 20:25:32.222339   27491 main.go:141] libmachine: Provisioning with buildroot...
	I0311 20:25:32.222347   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetMachineName
	I0311 20:25:32.222546   27491 buildroot.go:166] provisioning hostname "ha-834040-m03"
	I0311 20:25:32.222569   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetMachineName
	I0311 20:25:32.222758   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:32.225217   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.225618   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.225649   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.225774   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:32.225956   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.226105   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.226250   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:32.226411   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:25:32.226570   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0311 20:25:32.226586   27491 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-834040-m03 && echo "ha-834040-m03" | sudo tee /etc/hostname
	I0311 20:25:32.361275   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-834040-m03
	
	I0311 20:25:32.361301   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:32.363908   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.364271   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.364298   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.364536   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:32.364700   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.364877   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.365044   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:32.365218   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:25:32.365393   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0311 20:25:32.365418   27491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-834040-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-834040-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-834040-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 20:25:32.492045   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:25:32.492074   27491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 20:25:32.492093   27491 buildroot.go:174] setting up certificates
	I0311 20:25:32.492105   27491 provision.go:84] configureAuth start
	I0311 20:25:32.492117   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetMachineName
	I0311 20:25:32.492383   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:25:32.494867   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.495252   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.495281   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.495391   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:32.497440   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.497782   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.497805   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.497942   27491 provision.go:143] copyHostCerts
	I0311 20:25:32.497964   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:25:32.497990   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 20:25:32.497999   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:25:32.498060   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 20:25:32.498140   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:25:32.498158   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 20:25:32.498164   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:25:32.498186   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 20:25:32.498238   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:25:32.498255   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 20:25:32.498261   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:25:32.498283   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 20:25:32.498334   27491 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.ha-834040-m03 san=[127.0.0.1 192.168.39.40 ha-834040-m03 localhost minikube]
	I0311 20:25:32.678172   27491 provision.go:177] copyRemoteCerts
	I0311 20:25:32.678231   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 20:25:32.678253   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:32.680841   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.681160   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.681187   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.681359   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:32.681533   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.681682   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:32.681810   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:25:32.773481   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0311 20:25:32.773541   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 20:25:32.803215   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0311 20:25:32.803282   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0311 20:25:32.834068   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0311 20:25:32.834146   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 20:25:32.864236   27491 provision.go:87] duration metric: took 372.118438ms to configureAuth
	I0311 20:25:32.864261   27491 buildroot.go:189] setting minikube options for container-runtime
	I0311 20:25:32.864512   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:25:32.864611   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:32.867260   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.867628   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.867648   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.867855   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:32.868051   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.868235   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.868397   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:32.868558   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:25:32.868715   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0311 20:25:32.868729   27491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 20:25:33.157772   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 20:25:33.157796   27491 main.go:141] libmachine: Checking connection to Docker...
	I0311 20:25:33.157804   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetURL
	I0311 20:25:33.159064   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Using libvirt version 6000000
	I0311 20:25:33.161808   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.162234   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.162263   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.162558   27491 main.go:141] libmachine: Docker is up and running!
	I0311 20:25:33.162578   27491 main.go:141] libmachine: Reticulating splines...
	I0311 20:25:33.162586   27491 client.go:171] duration metric: took 23.431512987s to LocalClient.Create
	I0311 20:25:33.162610   27491 start.go:167] duration metric: took 23.431583694s to libmachine.API.Create "ha-834040"
	I0311 20:25:33.162623   27491 start.go:293] postStartSetup for "ha-834040-m03" (driver="kvm2")
	I0311 20:25:33.162636   27491 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 20:25:33.162656   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:33.162886   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 20:25:33.162912   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:33.165322   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.165672   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.165694   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.165820   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:33.166000   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:33.166161   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:33.166295   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:25:33.257602   27491 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 20:25:33.262493   27491 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 20:25:33.262519   27491 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 20:25:33.262590   27491 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 20:25:33.262663   27491 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 20:25:33.262675   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /etc/ssl/certs/182352.pem
	I0311 20:25:33.262748   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 20:25:33.273859   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:25:33.300785   27491 start.go:296] duration metric: took 138.149269ms for postStartSetup
	I0311 20:25:33.300838   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetConfigRaw
	I0311 20:25:33.301361   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:25:33.304190   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.304574   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.304606   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.304935   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:25:33.305148   27491 start.go:128] duration metric: took 23.594261602s to createHost
	I0311 20:25:33.305169   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:33.307510   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.307859   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.307881   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.308017   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:33.308197   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:33.308338   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:33.308436   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:33.308553   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:25:33.308760   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0311 20:25:33.308774   27491 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 20:25:33.425866   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710188733.401747367
	
	I0311 20:25:33.425888   27491 fix.go:216] guest clock: 1710188733.401747367
	I0311 20:25:33.425895   27491 fix.go:229] Guest: 2024-03-11 20:25:33.401747367 +0000 UTC Remote: 2024-03-11 20:25:33.305158733 +0000 UTC m=+167.994746101 (delta=96.588634ms)
	I0311 20:25:33.425910   27491 fix.go:200] guest clock delta is within tolerance: 96.588634ms
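	The clock check above parses `date +%s.%N` output from the guest and compares it with the host's wall clock, accepting the machine when the delta stays within a tolerance. A tiny sketch using the two timestamps from this run; the 2s tolerance here is an assumption, not minikube's exact threshold:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports the absolute guest/host clock difference and whether
	// it falls within the given tolerance.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		guest := time.Unix(1710188733, 401747367) // parsed from the guest's `date` output above
		host := time.Unix(1710188733, 305158733)  // host time at the same check
		d, ok := clockDeltaOK(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", d, ok) // delta=96.588634ms within tolerance=true
	}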
	I0311 20:25:33.425917   27491 start.go:83] releasing machines lock for "ha-834040-m03", held for 23.715182973s
	I0311 20:25:33.425939   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:33.426192   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:25:33.428684   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.429057   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.429076   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.432079   27491 out.go:177] * Found network options:
	I0311 20:25:33.433502   27491 out.go:177]   - NO_PROXY=192.168.39.128,192.168.39.101
	W0311 20:25:33.434677   27491 proxy.go:119] fail to check proxy env: Error ip not in block
	W0311 20:25:33.434695   27491 proxy.go:119] fail to check proxy env: Error ip not in block
	I0311 20:25:33.434706   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:33.435241   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:33.435411   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:33.435498   27491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 20:25:33.435531   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	W0311 20:25:33.435607   27491 proxy.go:119] fail to check proxy env: Error ip not in block
	W0311 20:25:33.435627   27491 proxy.go:119] fail to check proxy env: Error ip not in block
	I0311 20:25:33.435696   27491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 20:25:33.435713   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:33.438098   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.438247   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.438526   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.438552   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.438619   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:33.438619   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.438640   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.438784   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:33.438808   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:33.438996   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:33.439004   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:33.439148   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:33.439181   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:25:33.439249   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:25:33.682921   27491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 20:25:33.690091   27491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 20:25:33.690155   27491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 20:25:33.707619   27491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 20:25:33.707642   27491 start.go:494] detecting cgroup driver to use...
	I0311 20:25:33.707704   27491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 20:25:33.730253   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 20:25:33.745202   27491 docker.go:217] disabling cri-docker service (if available) ...
	I0311 20:25:33.745254   27491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 20:25:33.760286   27491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 20:25:33.779199   27491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 20:25:33.918971   27491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 20:25:34.100404   27491 docker.go:233] disabling docker service ...
	I0311 20:25:34.100476   27491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 20:25:34.117814   27491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 20:25:34.131823   27491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 20:25:34.257499   27491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 20:25:34.385437   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 20:25:34.402125   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 20:25:34.423643   27491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 20:25:34.423703   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:25:34.435924   27491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 20:25:34.435973   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:25:34.448545   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:25:34.460316   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:25:34.473287   27491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 20:25:34.489367   27491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 20:25:34.504068   27491 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 20:25:34.504105   27491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 20:25:34.518731   27491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 20:25:34.530120   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:25:34.667158   27491 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 20:25:34.824768   27491 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 20:25:34.824851   27491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 20:25:34.830068   27491 start.go:562] Will wait 60s for crictl version
	I0311 20:25:34.830116   27491 ssh_runner.go:195] Run: which crictl
	I0311 20:25:34.834225   27491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 20:25:34.875789   27491 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 20:25:34.875859   27491 ssh_runner.go:195] Run: crio --version
	I0311 20:25:34.909125   27491 ssh_runner.go:195] Run: crio --version
	I0311 20:25:34.941559   27491 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 20:25:34.942873   27491 out.go:177]   - env NO_PROXY=192.168.39.128
	I0311 20:25:34.944088   27491 out.go:177]   - env NO_PROXY=192.168.39.128,192.168.39.101
	I0311 20:25:34.945212   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:25:34.947834   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:34.948205   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:34.948230   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:34.948417   27491 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 20:25:34.952855   27491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 20:25:34.967756   27491 mustload.go:65] Loading cluster: ha-834040
	I0311 20:25:34.967978   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:25:34.968213   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:25:34.968246   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:25:34.985591   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44501
	I0311 20:25:34.986032   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:25:34.986556   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:25:34.986575   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:25:34.986863   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:25:34.987042   27491 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:25:34.988397   27491 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:25:34.988694   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:25:34.988728   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:25:35.002546   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0311 20:25:35.002979   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:25:35.003368   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:25:35.003392   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:25:35.003694   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:25:35.003879   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:25:35.004031   27491 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040 for IP: 192.168.39.40
	I0311 20:25:35.004044   27491 certs.go:194] generating shared ca certs ...
	I0311 20:25:35.004059   27491 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:25:35.004156   27491 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 20:25:35.004203   27491 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 20:25:35.004213   27491 certs.go:256] generating profile certs ...
	I0311 20:25:35.004278   27491 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key
	I0311 20:25:35.004301   27491 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.78064c2e
	I0311 20:25:35.004314   27491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.78064c2e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.101 192.168.39.40 192.168.39.254]
	I0311 20:25:35.065803   27491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.78064c2e ...
	I0311 20:25:35.065827   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.78064c2e: {Name:mkbcf692f53b531dbeecd9b17696ae18bbdb46c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:25:35.065977   27491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.78064c2e ...
	I0311 20:25:35.065988   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.78064c2e: {Name:mk5766d5ea000b5e91e4f884a481b9bb80e2abe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:25:35.066059   27491 certs.go:381] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.78064c2e -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt
	I0311 20:25:35.066178   27491 certs.go:385] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.78064c2e -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key
	I0311 20:25:35.066294   27491 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key
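The profile certificate generated above carries IP SANs for the service VIP (10.96.0.1), localhost, the node IPs, and the HA virtual IP 192.168.39.254, so the apiserver is reachable over TLS on any of them. As a rough illustration (not part of minikube; the guest-side path is an assumption taken from the log), a small Go program like the following would print those SANs from the copied certificate:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Illustrative path: where the log above copies the generated apiserver certificate.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("subject:", cert.Subject.CommonName)
	for _, ip := range cert.IPAddresses {
		fmt.Println("IP SAN: ", ip) // expect 10.96.0.1, 127.0.0.1, 10.0.0.1, the node IPs and 192.168.39.254
	}
	for _, name := range cert.DNSNames {
		fmt.Println("DNS SAN:", name)
	}
}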
	I0311 20:25:35.066308   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0311 20:25:35.066320   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0311 20:25:35.066330   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0311 20:25:35.066347   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0311 20:25:35.066359   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0311 20:25:35.066370   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0311 20:25:35.066379   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0311 20:25:35.066389   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0311 20:25:35.066430   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 20:25:35.066456   27491 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 20:25:35.066465   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 20:25:35.066486   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 20:25:35.066506   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 20:25:35.066527   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 20:25:35.066565   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:25:35.066592   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /usr/share/ca-certificates/182352.pem
	I0311 20:25:35.066606   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:25:35.066619   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem -> /usr/share/ca-certificates/18235.pem
	I0311 20:25:35.066648   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:25:35.069587   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:25:35.069930   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:25:35.069948   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:25:35.070097   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:25:35.070268   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:25:35.070448   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:25:35.070609   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:25:35.149002   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0311 20:25:35.154911   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0311 20:25:35.171044   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0311 20:25:35.176214   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0311 20:25:35.188414   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0311 20:25:35.193655   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0311 20:25:35.208849   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0311 20:25:35.213796   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0311 20:25:35.229020   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0311 20:25:35.233951   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0311 20:25:35.246159   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0311 20:25:35.251023   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0311 20:25:35.262706   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 20:25:35.290538   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 20:25:35.317175   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 20:25:35.344091   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 20:25:35.369653   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0311 20:25:35.397081   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 20:25:35.422620   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 20:25:35.448035   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 20:25:35.474624   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 20:25:35.501661   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 20:25:35.527034   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 20:25:35.551653   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0311 20:25:35.571923   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0311 20:25:35.592698   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0311 20:25:35.612110   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0311 20:25:35.630901   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0311 20:25:35.649998   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0311 20:25:35.668152   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0311 20:25:35.685805   27491 ssh_runner.go:195] Run: openssl version
	I0311 20:25:35.691628   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 20:25:35.703675   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:25:35.708403   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:25:35.708444   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:25:35.714336   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 20:25:35.726282   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 20:25:35.738193   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 20:25:35.743029   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 20:25:35.743071   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 20:25:35.749065   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 20:25:35.760967   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 20:25:35.773661   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 20:25:35.780867   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 20:25:35.780911   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 20:25:35.787005   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
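The openssl hash/symlink commands above register minikubeCA and the user certificates in the guest's system trust store under /etc/ssl/certs. A minimal Go sketch (not from the minikube codebase; both paths are assumptions based on the log) that checks the generated apiserver certificate actually chains to that CA:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Guest-side locations as shown in the log above.
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	leafPEM, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}

	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(caPEM) {
		panic("could not parse minikubeCA")
	}

	block, _ := pem.Decode(leafPEM)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	leaf, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// Only the chain is validated here; hostname checking is skipped.
	if _, err := leaf.Verify(x509.VerifyOptions{Roots: roots}); err != nil {
		fmt.Println("apiserver.crt does NOT chain to minikubeCA:", err)
		return
	}
	fmt.Println("apiserver.crt chains to minikubeCA")
}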
	I0311 20:25:35.799108   27491 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 20:25:35.803505   27491 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 20:25:35.803558   27491 kubeadm.go:928] updating node {m03 192.168.39.40 8443 v1.28.4 crio true true} ...
	I0311 20:25:35.803652   27491 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-834040-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.40
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 20:25:35.803679   27491 kube-vip.go:101] generating kube-vip config ...
	I0311 20:25:35.803702   27491 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
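The manifest above runs kube-vip as a static pod that holds the control-plane VIP 192.168.39.254 and coordinates through a leader-election Lease named plndr-cp-lock in kube-system (per the vip_leasename and cp_namespace values). A hedged client-go sketch, not part of the test harness, for reading the current lease holder; the admin.conf path is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Any admin kubeconfig for the cluster works; this path is an assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Lease name and namespace come from the kube-vip config shown above.
	lease, err := cs.CoordinationV1().Leases("kube-system").Get(context.TODO(), "plndr-cp-lock", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("current kube-vip leader:", *lease.Spec.HolderIdentity)
	} else {
		fmt.Println("lease exists but has no holder yet")
	}
}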
	I0311 20:25:35.803733   27491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 20:25:35.814649   27491 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0311 20:25:35.814686   27491 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0311 20:25:35.825551   27491 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0311 20:25:35.825574   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0311 20:25:35.825574   27491 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0311 20:25:35.825552   27491 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0311 20:25:35.825613   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:25:35.825663   27491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0311 20:25:35.825612   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0311 20:25:35.825730   27491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0311 20:25:35.845620   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0311 20:25:35.845677   27491 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0311 20:25:35.845689   27491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0311 20:25:35.845704   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0311 20:25:35.845707   27491 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0311 20:25:35.845723   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0311 20:25:35.855378   27491 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0311 20:25:35.855406   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
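The kubeadm, kubectl and kubelet binaries above are fetched via dl.k8s.io with a checksum=file:...sha256 query, i.e. each download is verified against its published SHA-256 digest before being copied into /var/lib/minikube/binaries. A minimal Go sketch of that verification step, assuming the .sha256 sidecar file contains only the hex digest (file names are illustrative):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

func main() {
	// Expected digest from the .sha256 sidecar referenced by the checksum=file: URLs above.
	want, err := os.ReadFile("kubeadm.sha256")
	if err != nil {
		panic(err)
	}
	f, err := os.Open("kubeadm")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Hash the downloaded binary and compare against the expected digest.
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	got := hex.EncodeToString(h.Sum(nil))

	if strings.TrimSpace(string(want)) == got {
		fmt.Println("checksum OK")
	} else {
		fmt.Printf("checksum mismatch: want %s, got %s\n", strings.TrimSpace(string(want)), got)
	}
}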
	I0311 20:25:36.840758   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0311 20:25:36.851442   27491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0311 20:25:36.870142   27491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 20:25:36.888502   27491 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0311 20:25:36.906199   27491 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0311 20:25:36.910659   27491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 20:25:36.924906   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:25:37.066725   27491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:25:37.086385   27491 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:25:37.086693   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:25:37.086738   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:25:37.101795   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I0311 20:25:37.102198   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:25:37.102682   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:25:37.102707   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:25:37.103029   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:25:37.103254   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:25:37.103402   27491 start.go:316] joinCluster: &{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:25:37.103519   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0311 20:25:37.103533   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:25:37.106529   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:25:37.107026   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:25:37.107047   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:25:37.107302   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:25:37.107446   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:25:37.107655   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:25:37.107815   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:25:37.273645   27491 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:25:37.273686   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s6hbui.0exotx56n62g6204 --discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-834040-m03 --control-plane --apiserver-advertise-address=192.168.39.40 --apiserver-bind-port=8443"
	I0311 20:26:06.268630   27491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s6hbui.0exotx56n62g6204 --discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-834040-m03 --control-plane --apiserver-advertise-address=192.168.39.40 --apiserver-bind-port=8443": (28.994918227s)
	I0311 20:26:06.268667   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0311 20:26:07.004010   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-834040-m03 minikube.k8s.io/updated_at=2024_03_11T20_26_07_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=ha-834040 minikube.k8s.io/primary=false
	I0311 20:26:07.126195   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-834040-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0311 20:26:07.279907   27491 start.go:318] duration metric: took 30.176500753s to joinCluster
	I0311 20:26:07.279980   27491 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:26:07.281633   27491 out.go:177] * Verifying Kubernetes components...
	I0311 20:26:07.280341   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:26:07.283430   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:26:07.594824   27491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:26:07.718426   27491 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:26:07.718753   27491 kapi.go:59] client config for ha-834040: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.crt", KeyFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key", CAFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55640), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0311 20:26:07.718830   27491 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.128:8443
	I0311 20:26:07.719076   27491 node_ready.go:35] waiting up to 6m0s for node "ha-834040-m03" to be "Ready" ...
	I0311 20:26:07.719161   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:07.719168   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:07.719179   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:07.719185   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:07.723962   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:08.220047   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:08.220074   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:08.220086   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:08.220094   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:08.225265   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:26:08.720266   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:08.720283   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:08.720292   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:08.720296   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:08.725024   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:09.220016   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:09.220046   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:09.220058   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:09.220063   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:09.224938   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:09.719994   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:09.720015   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:09.720026   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:09.720032   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:09.765396   27491 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0311 20:26:09.766292   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
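The polling loop above issues repeated GET /api/v1/nodes/ha-834040-m03 requests and checks the node's Ready condition until it becomes True or the 6m0s budget expires. An equivalent, simplified client-go sketch (kubeconfig path and node name are taken from the log; this is not the harness's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18358-11004/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-834040-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node ha-834040-m03 is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // mirrors the ~500ms poll interval visible in the timestamps above
	}
	fmt.Println("timed out waiting for node to become Ready")
}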
	I0311 20:26:10.219625   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:10.219646   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:10.219654   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:10.219658   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:10.223885   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:10.719394   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:10.719434   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:10.719445   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:10.719452   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:10.723231   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:11.219580   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:11.219601   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:11.219611   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:11.219618   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:11.225659   27491 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0311 20:26:11.720255   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:11.720281   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:11.720293   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:11.720299   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:11.725960   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:26:12.219276   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:12.219297   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:12.219305   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:12.219308   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:12.223437   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:12.224692   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:12.720135   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:12.720156   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:12.720164   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:12.720168   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:12.724331   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:13.220139   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:13.220158   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:13.220166   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:13.220169   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:13.229448   27491 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0311 20:26:13.719433   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:13.719454   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:13.719466   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:13.719471   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:13.723282   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:14.219258   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:14.219280   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:14.219291   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:14.219300   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:14.223301   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:14.719579   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:14.719599   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:14.719606   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:14.719611   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:14.723604   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:14.724332   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:15.219348   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:15.219391   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:15.219399   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:15.219404   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:15.223615   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:15.719330   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:15.719350   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:15.719356   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:15.719361   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:15.722989   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:16.219193   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:16.219240   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:16.219249   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:16.219253   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:16.223516   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:16.719912   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:16.719936   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:16.719947   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:16.719953   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:16.723779   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:16.724752   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:17.220261   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:17.220281   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:17.220297   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:17.220301   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:17.224452   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:17.719389   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:17.719412   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:17.719423   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:17.719435   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:17.724115   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:18.219225   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:18.219249   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:18.219258   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:18.219262   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:18.223541   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:18.720104   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:18.720125   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:18.720132   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:18.720136   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:18.724024   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:18.724801   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:19.220235   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:19.220258   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:19.220267   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:19.220270   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:19.224218   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:19.719704   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:19.719734   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:19.719742   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:19.719745   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:19.723883   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:20.219768   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:20.219792   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:20.219803   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:20.219810   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:20.223736   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:20.719296   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:20.719315   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:20.719321   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:20.719325   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:20.724046   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:20.725418   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:21.219281   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:21.219305   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:21.219315   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:21.219320   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:21.226663   27491 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 20:26:21.719320   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:21.719342   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:21.719351   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:21.719355   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:21.723326   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:22.219803   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:22.219832   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:22.219845   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:22.219852   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:22.223851   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:22.720067   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:22.720101   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:22.720109   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:22.720112   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:22.724228   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:23.220166   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:23.220187   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:23.220195   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:23.220198   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:23.224743   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:23.225828   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:23.720264   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:23.720289   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:23.720301   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:23.720306   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:23.726120   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:26:24.219346   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:24.219366   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:24.219374   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:24.219378   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:24.223215   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:24.719424   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:24.719445   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:24.719453   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:24.719457   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:24.723982   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:25.219845   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:25.219876   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:25.219887   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:25.219893   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:25.223507   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:25.720233   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:25.720255   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:25.720263   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:25.720266   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:25.724248   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:25.724809   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:26.219361   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:26.219380   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:26.219388   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:26.219391   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:26.223068   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:26.720246   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:26.720266   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:26.720274   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:26.720280   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:26.723719   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:27.219260   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:27.219280   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:27.219293   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:27.219298   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:27.223317   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:27.719222   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:27.719241   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:27.719248   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:27.719252   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:27.723485   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:28.220099   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:28.220120   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:28.220128   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:28.220133   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:28.224190   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:28.224714   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:28.720105   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:28.720133   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:28.720144   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:28.720149   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:28.724865   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:29.219976   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:29.220016   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:29.220027   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:29.220032   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:29.223773   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:29.720030   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:29.720057   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:29.720067   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:29.720074   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:29.723785   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:30.219771   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:30.219801   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:30.219812   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:30.219818   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:30.223844   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:30.719425   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:30.719448   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:30.719458   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:30.719463   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:30.722844   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:30.723585   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:31.219297   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:31.219317   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:31.219325   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:31.219329   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:31.223051   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:31.720148   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:31.720172   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:31.720183   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:31.720190   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:31.724043   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:32.219757   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:32.219777   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:32.219785   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:32.219797   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:32.224264   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:32.720293   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:32.720320   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:32.720330   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:32.720335   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:32.724333   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:32.725270   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:33.219489   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:33.219509   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:33.219516   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:33.219520   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:33.223108   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:33.719288   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:33.719316   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:33.719326   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:33.719334   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:33.723411   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:34.219875   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:34.219902   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:34.219919   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:34.219924   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:34.223753   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:34.719432   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:34.719459   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:34.719465   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:34.719469   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:34.724077   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:35.220180   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:35.220201   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:35.220209   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:35.220213   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:35.224161   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:35.224842   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:35.719959   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:35.719979   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:35.719990   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:35.719994   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:35.724132   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:36.219604   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:36.219622   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:36.219630   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:36.219635   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:36.223713   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:36.719620   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:36.719642   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:36.719651   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:36.719655   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:36.723603   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:37.219862   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:37.219882   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:37.219890   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:37.219893   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:37.223874   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:37.719599   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:37.719624   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:37.719631   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:37.719636   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:37.723918   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:37.724593   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:38.219937   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:38.219956   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:38.219964   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:38.219969   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:38.224012   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:38.719979   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:38.720000   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:38.720012   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:38.720017   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:38.724196   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:39.219349   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:39.219367   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:39.219375   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:39.219379   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:39.223180   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:39.720196   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:39.720220   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:39.720230   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:39.720236   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:39.724488   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:39.725288   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:40.220253   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:40.220274   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:40.220282   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:40.220285   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:40.223715   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:40.720325   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:40.720350   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:40.720361   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:40.720368   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:40.724675   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:41.219428   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:41.219447   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.219455   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.219460   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.223091   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.720270   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:41.720296   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.720321   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.720325   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.723935   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.724645   27491 node_ready.go:49] node "ha-834040-m03" has status "Ready":"True"
	I0311 20:26:41.724662   27491 node_ready.go:38] duration metric: took 34.00556893s for node "ha-834040-m03" to be "Ready" ...
	I0311 20:26:41.724670   27491 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 20:26:41.724719   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:26:41.724729   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.724752   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.724758   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.732339   27491 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 20:26:41.738925   27491 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d6f2x" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.739011   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d6f2x
	I0311 20:26:41.739037   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.739052   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.739061   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.742759   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.743554   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:41.743569   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.743576   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.743579   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.746999   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.747593   27491 pod_ready.go:92] pod "coredns-5dd5756b68-d6f2x" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:41.747609   27491 pod_ready.go:81] duration metric: took 8.660607ms for pod "coredns-5dd5756b68-d6f2x" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.747620   27491 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kq47h" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.747675   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kq47h
	I0311 20:26:41.747686   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.747694   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.747699   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.751270   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.752107   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:41.752123   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.752129   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.752133   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.755022   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:26:41.755612   27491 pod_ready.go:92] pod "coredns-5dd5756b68-kq47h" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:41.755632   27491 pod_ready.go:81] duration metric: took 8.005858ms for pod "coredns-5dd5756b68-kq47h" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.755641   27491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.755711   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-834040
	I0311 20:26:41.755723   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.755730   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.755736   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.759060   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.759737   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:41.759750   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.759757   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.759761   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.762560   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:26:41.763231   27491 pod_ready.go:92] pod "etcd-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:41.763245   27491 pod_ready.go:81] duration metric: took 7.591981ms for pod "etcd-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.763253   27491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.763307   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-834040-m02
	I0311 20:26:41.763315   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.763322   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.763325   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.766046   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:26:41.766617   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:41.766629   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.766636   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.766640   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.770196   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.770854   27491 pod_ready.go:92] pod "etcd-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:41.770869   27491 pod_ready.go:81] duration metric: took 7.611236ms for pod "etcd-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.770877   27491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.921250   27491 request.go:629] Waited for 150.293497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-834040-m03
	I0311 20:26:41.921305   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-834040-m03
	I0311 20:26:41.921312   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.921322   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.921334   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.924892   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:42.120965   27491 request.go:629] Waited for 195.387698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:42.121037   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:42.121043   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:42.121050   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:42.121058   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:42.125315   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:42.125948   27491 pod_ready.go:92] pod "etcd-ha-834040-m03" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:42.125965   27491 pod_ready.go:81] duration metric: took 355.082755ms for pod "etcd-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:42.125980   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:42.321051   27491 request.go:629] Waited for 195.010378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040
	I0311 20:26:42.321132   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040
	I0311 20:26:42.321147   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:42.321157   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:42.321167   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:42.325274   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:42.520284   27491 request.go:629] Waited for 194.26956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:42.520347   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:42.520355   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:42.520374   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:42.520380   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:42.523707   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:42.524370   27491 pod_ready.go:92] pod "kube-apiserver-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:42.524389   27491 pod_ready.go:81] duration metric: took 398.40224ms for pod "kube-apiserver-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:42.524397   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:42.720350   27491 request.go:629] Waited for 195.861906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:26:42.720402   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:26:42.720408   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:42.720419   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:42.720429   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:42.724397   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:42.920829   27491 request.go:629] Waited for 195.32487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:42.920896   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:42.920907   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:42.920917   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:42.920922   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:42.925609   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:42.926427   27491 pod_ready.go:92] pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:42.926443   27491 pod_ready.go:81] duration metric: took 402.039947ms for pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:42.926452   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:43.120572   27491 request.go:629] Waited for 194.063187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m03
	I0311 20:26:43.120633   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m03
	I0311 20:26:43.120638   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:43.120650   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:43.120653   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:43.124557   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:43.320910   27491 request.go:629] Waited for 195.361654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:43.320993   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:43.321004   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:43.321016   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:43.321025   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:43.326297   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:26:43.326735   27491 pod_ready.go:92] pod "kube-apiserver-ha-834040-m03" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:43.326753   27491 pod_ready.go:81] duration metric: took 400.293957ms for pod "kube-apiserver-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:43.326766   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:43.520813   27491 request.go:629] Waited for 193.981718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040
	I0311 20:26:43.520893   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040
	I0311 20:26:43.520904   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:43.520915   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:43.520925   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:43.528015   27491 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 20:26:43.721119   27491 request.go:629] Waited for 192.380119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:43.721171   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:43.721176   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:43.721183   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:43.721187   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:43.724840   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:43.725550   27491 pod_ready.go:92] pod "kube-controller-manager-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:43.725567   27491 pod_ready.go:81] duration metric: took 398.793378ms for pod "kube-controller-manager-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:43.725580   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:43.920641   27491 request.go:629] Waited for 194.993846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040-m02
	I0311 20:26:43.920714   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040-m02
	I0311 20:26:43.920722   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:43.920732   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:43.920757   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:43.924521   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:44.120469   27491 request.go:629] Waited for 195.282306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:44.120544   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:44.120553   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:44.120583   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:44.120594   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:44.123920   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:44.124683   27491 pod_ready.go:92] pod "kube-controller-manager-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:44.124700   27491 pod_ready.go:81] duration metric: took 399.1137ms for pod "kube-controller-manager-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:44.124710   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:44.320791   27491 request.go:629] Waited for 195.99729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040-m03
	I0311 20:26:44.320857   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040-m03
	I0311 20:26:44.320868   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:44.320878   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:44.320882   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:44.324830   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:44.521112   27491 request.go:629] Waited for 195.375116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:44.521174   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:44.521181   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:44.521192   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:44.521202   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:44.530131   27491 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0311 20:26:44.530711   27491 pod_ready.go:92] pod "kube-controller-manager-ha-834040-m03" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:44.530730   27491 pod_ready.go:81] duration metric: took 406.014637ms for pod "kube-controller-manager-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:44.530740   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4kkwc" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:44.720914   27491 request.go:629] Waited for 190.104905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4kkwc
	I0311 20:26:44.720975   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4kkwc
	I0311 20:26:44.720981   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:44.720988   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:44.720993   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:44.725517   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:44.920916   27491 request.go:629] Waited for 194.662377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:44.920962   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:44.920967   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:44.920974   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:44.920981   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:44.924388   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:44.925318   27491 pod_ready.go:92] pod "kube-proxy-4kkwc" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:44.925337   27491 pod_ready.go:81] duration metric: took 394.590294ms for pod "kube-proxy-4kkwc" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:44.925348   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dsjx4" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:45.121278   27491 request.go:629] Waited for 195.868347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsjx4
	I0311 20:26:45.121350   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsjx4
	I0311 20:26:45.121358   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:45.121369   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:45.121385   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:45.124993   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:45.321251   27491 request.go:629] Waited for 195.371519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:45.321343   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:45.321356   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:45.321365   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:45.321370   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:45.325375   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:45.326090   27491 pod_ready.go:92] pod "kube-proxy-dsjx4" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:45.326111   27491 pod_ready.go:81] duration metric: took 400.753888ms for pod "kube-proxy-dsjx4" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:45.326120   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h8svv" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:45.520780   27491 request.go:629] Waited for 194.601973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h8svv
	I0311 20:26:45.520871   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h8svv
	I0311 20:26:45.520887   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:45.520896   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:45.520905   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:45.524578   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:45.720570   27491 request.go:629] Waited for 195.34865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:45.720618   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:45.720623   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:45.720631   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:45.720636   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:45.724637   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:45.725490   27491 pod_ready.go:92] pod "kube-proxy-h8svv" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:45.725514   27491 pod_ready.go:81] duration metric: took 399.386613ms for pod "kube-proxy-h8svv" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:45.725526   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:45.920956   27491 request.go:629] Waited for 195.299264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040
	I0311 20:26:45.921014   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040
	I0311 20:26:45.921022   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:45.921036   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:45.921045   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:45.925603   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:46.120671   27491 request.go:629] Waited for 194.365352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:46.120717   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:46.120723   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:46.120729   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:46.120752   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:46.127516   27491 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0311 20:26:46.128233   27491 pod_ready.go:92] pod "kube-scheduler-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:46.128254   27491 pod_ready.go:81] duration metric: took 402.720062ms for pod "kube-scheduler-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:46.128275   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:46.321318   27491 request.go:629] Waited for 192.96989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040-m02
	I0311 20:26:46.321368   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040-m02
	I0311 20:26:46.321374   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:46.321381   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:46.321387   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:46.324797   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:46.520767   27491 request.go:629] Waited for 195.35794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:46.520824   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:46.520830   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:46.520840   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:46.520849   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:46.524127   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:46.524788   27491 pod_ready.go:92] pod "kube-scheduler-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:46.524807   27491 pod_ready.go:81] duration metric: took 396.520308ms for pod "kube-scheduler-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:46.524816   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:46.720810   27491 request.go:629] Waited for 195.935972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040-m03
	I0311 20:26:46.720876   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040-m03
	I0311 20:26:46.720881   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:46.720893   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:46.720901   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:46.724026   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:46.921238   27491 request.go:629] Waited for 196.348267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:46.921291   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:46.921296   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:46.921304   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:46.921307   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:46.925055   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:46.925974   27491 pod_ready.go:92] pod "kube-scheduler-ha-834040-m03" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:46.925996   27491 pod_ready.go:81] duration metric: took 401.172976ms for pod "kube-scheduler-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:46.926009   27491 pod_ready.go:38] duration metric: took 5.201330525s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 20:26:46.926023   27491 api_server.go:52] waiting for apiserver process to appear ...
	I0311 20:26:46.926079   27491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:26:46.947106   27491 api_server.go:72] duration metric: took 39.667095801s to wait for apiserver process to appear ...
	I0311 20:26:46.947130   27491 api_server.go:88] waiting for apiserver healthz status ...
	I0311 20:26:46.947149   27491 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0311 20:26:46.954516   27491 api_server.go:279] https://192.168.39.128:8443/healthz returned 200:
	ok
	I0311 20:26:46.954585   27491 round_trippers.go:463] GET https://192.168.39.128:8443/version
	I0311 20:26:46.954597   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:46.954608   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:46.954621   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:46.955786   27491 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0311 20:26:46.955851   27491 api_server.go:141] control plane version: v1.28.4
	I0311 20:26:46.955868   27491 api_server.go:131] duration metric: took 8.730483ms to wait for apiserver health ...
	I0311 20:26:46.955880   27491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 20:26:47.120555   27491 request.go:629] Waited for 164.597389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:26:47.120616   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:26:47.120623   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:47.120633   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:47.120647   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:47.128008   27491 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 20:26:47.136078   27491 system_pods.go:59] 24 kube-system pods found
	I0311 20:26:47.136102   27491 system_pods.go:61] "coredns-5dd5756b68-d6f2x" [ddc7bef4-f6c5-442f-8149-e52a1822986d] Running
	I0311 20:26:47.136109   27491 system_pods.go:61] "coredns-5dd5756b68-kq47h" [f2a70553-206f-4d11-b32f-01ddd30db8ec] Running
	I0311 20:26:47.136114   27491 system_pods.go:61] "etcd-ha-834040" [76aef9d7-e8f7-4675-92db-614a3723f8b0] Running
	I0311 20:26:47.136120   27491 system_pods.go:61] "etcd-ha-834040-m02" [c87b59c2-5dcd-4217-9d64-1eab2ecf0075] Running
	I0311 20:26:47.136125   27491 system_pods.go:61] "etcd-ha-834040-m03" [554134f9-440a-4fce-8af9-f25a1a336610] Running
	I0311 20:26:47.136130   27491 system_pods.go:61] "kindnet-bw656" [edb13135-e5b5-46df-922e-5ebfb444c219] Running
	I0311 20:26:47.136147   27491 system_pods.go:61] "kindnet-cf888" [a0eb1481-fce7-4ede-9727-28ff9f3475b1] Running
	I0311 20:26:47.136154   27491 system_pods.go:61] "kindnet-rqcq6" [7c368ac4-0fa3-4185-98a7-40df481939ee] Running
	I0311 20:26:47.136157   27491 system_pods.go:61] "kube-apiserver-ha-834040" [f1a21652-f5f0-4ff4-a181-9719fbb72320] Running
	I0311 20:26:47.136160   27491 system_pods.go:61] "kube-apiserver-ha-834040-m02" [eaadd58d-4c00-4dd8-94fe-2d28bed895f5] Running
	I0311 20:26:47.136163   27491 system_pods.go:61] "kube-apiserver-ha-834040-m03" [60f94aa4-4332-4f32-b9ed-326492680654] Running
	I0311 20:26:47.136166   27491 system_pods.go:61] "kube-controller-manager-ha-834040" [48fff24f-f490-4cad-ae02-67dd35208820] Running
	I0311 20:26:47.136172   27491 system_pods.go:61] "kube-controller-manager-ha-834040-m02" [a3418676-a178-4f18-accd-cbc835234b6f] Running
	I0311 20:26:47.136175   27491 system_pods.go:61] "kube-controller-manager-ha-834040-m03" [44b609b0-feee-4b2d-a414-258c11a66810] Running
	I0311 20:26:47.136178   27491 system_pods.go:61] "kube-proxy-4kkwc" [bd3491fa-75a9-46ff-b61e-a818c82f1fc6] Running
	I0311 20:26:47.136180   27491 system_pods.go:61] "kube-proxy-dsjx4" [b8dccd4a-d900-4c56-8861-4c19dbda4a31] Running
	I0311 20:26:47.136183   27491 system_pods.go:61] "kube-proxy-h8svv" [3a7973ca-9a35-4190-8845-cc685619b093] Running
	I0311 20:26:47.136188   27491 system_pods.go:61] "kube-scheduler-ha-834040" [665bbcfc-d34c-46f7-8c3c-73380466fb35] Running
	I0311 20:26:47.136191   27491 system_pods.go:61] "kube-scheduler-ha-834040-m02" [3429847c-a119-4dba-bcfc-f41e6bd8b351] Running
	I0311 20:26:47.136196   27491 system_pods.go:61] "kube-scheduler-ha-834040-m03" [84aad696-7a60-4242-a214-17c9e4cf2bf6] Running
	I0311 20:26:47.136199   27491 system_pods.go:61] "kube-vip-ha-834040" [d539e386-31f6-4b7c-9e36-8a413b82a4a8] Running
	I0311 20:26:47.136202   27491 system_pods.go:61] "kube-vip-ha-834040-m02" [59d64aa5-94ab-44d5-a42e-5453eb2c0b37] Running
	I0311 20:26:47.136205   27491 system_pods.go:61] "kube-vip-ha-834040-m03" [6a95c6cb-4f07-49d7-abaa-facdc4b0e799] Running
	I0311 20:26:47.136208   27491 system_pods.go:61] "storage-provisioner" [bbc64228-86a0-4e0c-9eef-f4644439ca13] Running
	I0311 20:26:47.136213   27491 system_pods.go:74] duration metric: took 180.324544ms to wait for pod list to return data ...
	I0311 20:26:47.136222   27491 default_sa.go:34] waiting for default service account to be created ...
	I0311 20:26:47.320676   27491 request.go:629] Waited for 184.386345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0311 20:26:47.320767   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0311 20:26:47.320779   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:47.320789   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:47.320799   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:47.326192   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:26:47.326412   27491 default_sa.go:45] found service account: "default"
	I0311 20:26:47.326429   27491 default_sa.go:55] duration metric: took 190.197475ms for default service account to be created ...
	I0311 20:26:47.326438   27491 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 20:26:47.520810   27491 request.go:629] Waited for 194.258488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:26:47.520858   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:26:47.520863   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:47.520871   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:47.520875   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:47.528205   27491 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 20:26:47.535274   27491 system_pods.go:86] 24 kube-system pods found
	I0311 20:26:47.535299   27491 system_pods.go:89] "coredns-5dd5756b68-d6f2x" [ddc7bef4-f6c5-442f-8149-e52a1822986d] Running
	I0311 20:26:47.535304   27491 system_pods.go:89] "coredns-5dd5756b68-kq47h" [f2a70553-206f-4d11-b32f-01ddd30db8ec] Running
	I0311 20:26:47.535308   27491 system_pods.go:89] "etcd-ha-834040" [76aef9d7-e8f7-4675-92db-614a3723f8b0] Running
	I0311 20:26:47.535312   27491 system_pods.go:89] "etcd-ha-834040-m02" [c87b59c2-5dcd-4217-9d64-1eab2ecf0075] Running
	I0311 20:26:47.535316   27491 system_pods.go:89] "etcd-ha-834040-m03" [554134f9-440a-4fce-8af9-f25a1a336610] Running
	I0311 20:26:47.535320   27491 system_pods.go:89] "kindnet-bw656" [edb13135-e5b5-46df-922e-5ebfb444c219] Running
	I0311 20:26:47.535324   27491 system_pods.go:89] "kindnet-cf888" [a0eb1481-fce7-4ede-9727-28ff9f3475b1] Running
	I0311 20:26:47.535327   27491 system_pods.go:89] "kindnet-rqcq6" [7c368ac4-0fa3-4185-98a7-40df481939ee] Running
	I0311 20:26:47.535331   27491 system_pods.go:89] "kube-apiserver-ha-834040" [f1a21652-f5f0-4ff4-a181-9719fbb72320] Running
	I0311 20:26:47.535334   27491 system_pods.go:89] "kube-apiserver-ha-834040-m02" [eaadd58d-4c00-4dd8-94fe-2d28bed895f5] Running
	I0311 20:26:47.535338   27491 system_pods.go:89] "kube-apiserver-ha-834040-m03" [60f94aa4-4332-4f32-b9ed-326492680654] Running
	I0311 20:26:47.535345   27491 system_pods.go:89] "kube-controller-manager-ha-834040" [48fff24f-f490-4cad-ae02-67dd35208820] Running
	I0311 20:26:47.535349   27491 system_pods.go:89] "kube-controller-manager-ha-834040-m02" [a3418676-a178-4f18-accd-cbc835234b6f] Running
	I0311 20:26:47.535354   27491 system_pods.go:89] "kube-controller-manager-ha-834040-m03" [44b609b0-feee-4b2d-a414-258c11a66810] Running
	I0311 20:26:47.535358   27491 system_pods.go:89] "kube-proxy-4kkwc" [bd3491fa-75a9-46ff-b61e-a818c82f1fc6] Running
	I0311 20:26:47.535365   27491 system_pods.go:89] "kube-proxy-dsjx4" [b8dccd4a-d900-4c56-8861-4c19dbda4a31] Running
	I0311 20:26:47.535369   27491 system_pods.go:89] "kube-proxy-h8svv" [3a7973ca-9a35-4190-8845-cc685619b093] Running
	I0311 20:26:47.535375   27491 system_pods.go:89] "kube-scheduler-ha-834040" [665bbcfc-d34c-46f7-8c3c-73380466fb35] Running
	I0311 20:26:47.535379   27491 system_pods.go:89] "kube-scheduler-ha-834040-m02" [3429847c-a119-4dba-bcfc-f41e6bd8b351] Running
	I0311 20:26:47.535391   27491 system_pods.go:89] "kube-scheduler-ha-834040-m03" [84aad696-7a60-4242-a214-17c9e4cf2bf6] Running
	I0311 20:26:47.535397   27491 system_pods.go:89] "kube-vip-ha-834040" [d539e386-31f6-4b7c-9e36-8a413b82a4a8] Running
	I0311 20:26:47.535400   27491 system_pods.go:89] "kube-vip-ha-834040-m02" [59d64aa5-94ab-44d5-a42e-5453eb2c0b37] Running
	I0311 20:26:47.535404   27491 system_pods.go:89] "kube-vip-ha-834040-m03" [6a95c6cb-4f07-49d7-abaa-facdc4b0e799] Running
	I0311 20:26:47.535407   27491 system_pods.go:89] "storage-provisioner" [bbc64228-86a0-4e0c-9eef-f4644439ca13] Running
	I0311 20:26:47.535415   27491 system_pods.go:126] duration metric: took 208.971727ms to wait for k8s-apps to be running ...
	I0311 20:26:47.535423   27491 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 20:26:47.535469   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:26:47.553919   27491 system_svc.go:56] duration metric: took 18.485702ms WaitForService to wait for kubelet
	I0311 20:26:47.553950   27491 kubeadm.go:576] duration metric: took 40.273942997s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 20:26:47.553971   27491 node_conditions.go:102] verifying NodePressure condition ...
	I0311 20:26:47.720295   27491 request.go:629] Waited for 166.25817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes
	I0311 20:26:47.720345   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes
	I0311 20:26:47.720353   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:47.720365   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:47.720371   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:47.724896   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:47.726176   27491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 20:26:47.726204   27491 node_conditions.go:123] node cpu capacity is 2
	I0311 20:26:47.726216   27491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 20:26:47.726221   27491 node_conditions.go:123] node cpu capacity is 2
	I0311 20:26:47.726226   27491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 20:26:47.726229   27491 node_conditions.go:123] node cpu capacity is 2
	I0311 20:26:47.726233   27491 node_conditions.go:105] duration metric: took 172.255909ms to run NodePressure ...
	I0311 20:26:47.726246   27491 start.go:240] waiting for startup goroutines ...
	I0311 20:26:47.726268   27491 start.go:254] writing updated cluster config ...
	I0311 20:26:47.726546   27491 ssh_runner.go:195] Run: rm -f paused
	I0311 20:26:47.778492   27491 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 20:26:47.780590   27491 out.go:177] * Done! kubectl is now configured to use "ha-834040" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.767434245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189018767411315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5ecb490-0b80-4ec0-b5f5-7c6446a81103 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.768199025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50f762fd-c206-4db4-ba4c-cf82cb801ad8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.768252758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50f762fd-c206-4db4-ba4c-cf82cb801ad8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.768563959Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710188810909650029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ff55cc7dd7ce86b2ec6d65b88532b25bd348edd26139398dbf126195687f15,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710188690043049602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc1d1d2e164dd343671afbbbe3ffc3de1a7f9e87e3fb6c2094eed1725c62105,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710188690043182789,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626343031244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626308762017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:651df645b80859aac3940b6c46f612b7dfa6e63196eea16e71a4699e1dacd90d,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710188625312421373,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde13375794363aa708c796adf81c991290316a9abb1584bd0d1a6b7fcbc1239,PodSandboxId:97f4eaedf7381336de1f270c1327a82bac27c26c771a5df3e32cc259ef113390,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710188623496900367,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710188621618767385,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de629d59c426e67a341320405ba6e2db0a43a77097e61b6123f4636359ee3412,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710188602988167367,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710188600841991262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: et
cd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfa6c7eaf9de4ab3088d26a5835e9b00f125cd279c3b56757edcb48e368cbf8,PodSandboxId:ba0d4adac5c720e3d7577394479b4143283e2c9ddcc61e2ab1e57dcd4664342a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710188600790600914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710188600790390415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c6fc6f4ca02e29aec794ea48b682294a80ffbea548013775fff8dfd449a944,PodSandboxId:1d3a02c48636bed52fd7f56fa9670f0a3c8e5e4f596b89faa29081f66f463fc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710188600668037923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50f762fd-c206-4db4-ba4c-cf82cb801ad8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.814818121Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69f2f75c-b2ab-40fe-9e44-63605d6b49a7 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.814890639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69f2f75c-b2ab-40fe-9e44-63605d6b49a7 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.816835120Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b00b39a1-c3ba-49d7-aefd-2bf874c320c1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.817379950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189018817350634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b00b39a1-c3ba-49d7-aefd-2bf874c320c1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.818009919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c35eea29-cf4c-454f-88d4-6ef4f06c4f0e name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.818150752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c35eea29-cf4c-454f-88d4-6ef4f06c4f0e name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.818441411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710188810909650029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ff55cc7dd7ce86b2ec6d65b88532b25bd348edd26139398dbf126195687f15,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710188690043049602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc1d1d2e164dd343671afbbbe3ffc3de1a7f9e87e3fb6c2094eed1725c62105,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710188690043182789,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626343031244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626308762017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:651df645b80859aac3940b6c46f612b7dfa6e63196eea16e71a4699e1dacd90d,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710188625312421373,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde13375794363aa708c796adf81c991290316a9abb1584bd0d1a6b7fcbc1239,PodSandboxId:97f4eaedf7381336de1f270c1327a82bac27c26c771a5df3e32cc259ef113390,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710188623496900367,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710188621618767385,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de629d59c426e67a341320405ba6e2db0a43a77097e61b6123f4636359ee3412,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710188602988167367,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710188600841991262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: et
cd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfa6c7eaf9de4ab3088d26a5835e9b00f125cd279c3b56757edcb48e368cbf8,PodSandboxId:ba0d4adac5c720e3d7577394479b4143283e2c9ddcc61e2ab1e57dcd4664342a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710188600790600914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710188600790390415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c6fc6f4ca02e29aec794ea48b682294a80ffbea548013775fff8dfd449a944,PodSandboxId:1d3a02c48636bed52fd7f56fa9670f0a3c8e5e4f596b89faa29081f66f463fc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710188600668037923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c35eea29-cf4c-454f-88d4-6ef4f06c4f0e name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.859335616Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=192df8cd-76ce-4f6d-9a9d-231c36b694da name=/runtime.v1.RuntimeService/Version
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.859431918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=192df8cd-76ce-4f6d-9a9d-231c36b694da name=/runtime.v1.RuntimeService/Version
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.860687317Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1565bf3-78f7-4d84-8257-73a8ad722578 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.861231051Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189018861206426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1565bf3-78f7-4d84-8257-73a8ad722578 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.861775388Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e7550d8-8972-4c80-a9ae-a69be877d7b2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.861858550Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e7550d8-8972-4c80-a9ae-a69be877d7b2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.862217103Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710188810909650029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ff55cc7dd7ce86b2ec6d65b88532b25bd348edd26139398dbf126195687f15,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710188690043049602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc1d1d2e164dd343671afbbbe3ffc3de1a7f9e87e3fb6c2094eed1725c62105,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710188690043182789,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626343031244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626308762017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:651df645b80859aac3940b6c46f612b7dfa6e63196eea16e71a4699e1dacd90d,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710188625312421373,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde13375794363aa708c796adf81c991290316a9abb1584bd0d1a6b7fcbc1239,PodSandboxId:97f4eaedf7381336de1f270c1327a82bac27c26c771a5df3e32cc259ef113390,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710188623496900367,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710188621618767385,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de629d59c426e67a341320405ba6e2db0a43a77097e61b6123f4636359ee3412,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710188602988167367,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710188600841991262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: et
cd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfa6c7eaf9de4ab3088d26a5835e9b00f125cd279c3b56757edcb48e368cbf8,PodSandboxId:ba0d4adac5c720e3d7577394479b4143283e2c9ddcc61e2ab1e57dcd4664342a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710188600790600914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710188600790390415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c6fc6f4ca02e29aec794ea48b682294a80ffbea548013775fff8dfd449a944,PodSandboxId:1d3a02c48636bed52fd7f56fa9670f0a3c8e5e4f596b89faa29081f66f463fc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710188600668037923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e7550d8-8972-4c80-a9ae-a69be877d7b2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.902833365Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b412c743-7451-4364-85a4-9d91cb775cc7 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.903327094Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b412c743-7451-4364-85a4-9d91cb775cc7 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.904741068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2ca60dd-3dfd-4886-b090-e39185ccbfd1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.905300851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189018905277527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2ca60dd-3dfd-4886-b090-e39185ccbfd1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.905978887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41f1574d-d5cf-4b36-b5a9-f396eea29fea name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.906039156Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41f1574d-d5cf-4b36-b5a9-f396eea29fea name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:30:18 ha-834040 crio[675]: time="2024-03-11 20:30:18.906361669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710188810909650029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ff55cc7dd7ce86b2ec6d65b88532b25bd348edd26139398dbf126195687f15,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710188690043049602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc1d1d2e164dd343671afbbbe3ffc3de1a7f9e87e3fb6c2094eed1725c62105,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710188690043182789,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626343031244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626308762017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:651df645b80859aac3940b6c46f612b7dfa6e63196eea16e71a4699e1dacd90d,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710188625312421373,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde13375794363aa708c796adf81c991290316a9abb1584bd0d1a6b7fcbc1239,PodSandboxId:97f4eaedf7381336de1f270c1327a82bac27c26c771a5df3e32cc259ef113390,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710188623496900367,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710188621618767385,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de629d59c426e67a341320405ba6e2db0a43a77097e61b6123f4636359ee3412,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710188602988167367,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710188600841991262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: et
cd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfa6c7eaf9de4ab3088d26a5835e9b00f125cd279c3b56757edcb48e368cbf8,PodSandboxId:ba0d4adac5c720e3d7577394479b4143283e2c9ddcc61e2ab1e57dcd4664342a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710188600790600914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710188600790390415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c6fc6f4ca02e29aec794ea48b682294a80ffbea548013775fff8dfd449a944,PodSandboxId:1d3a02c48636bed52fd7f56fa9670f0a3c8e5e4f596b89faa29081f66f463fc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710188600668037923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41f1574d-d5cf-4b36-b5a9-f396eea29fea name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	251e9f2d7df5c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   417164b9b0cb4       busybox-5b5d89c9d6-d62cw
	afc1d1d2e164d       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  1                   dcb18e5f12de1       kube-vip-ha-834040
	48ff55cc7dd7c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       1                   023c0d7d16ddd       storage-provisioner
	7be345e0f22ca       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   4860ab9172968       coredns-5dd5756b68-kq47h
	6926d89f93fa7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   94384bd2f8c98       coredns-5dd5756b68-d6f2x
	651df645b8085       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       0                   023c0d7d16ddd       storage-provisioner
	bde1337579436       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    6 minutes ago       Running             kindnet-cni               0                   97f4eaedf7381       kindnet-bw656
	ab5ff27a1d4cb       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      6 minutes ago       Running             kube-proxy                0                   a9e018e6df6e7       kube-proxy-h8svv
	de629d59c426e       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Exited              kube-vip                  0                   dcb18e5f12de1       kube-vip-ha-834040
	4395af23a1752       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      6 minutes ago       Running             etcd                      0                   3e8bbccfbf388       etcd-ha-834040
	abfa6c7eaf9de       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      6 minutes ago       Running             kube-controller-manager   0                   ba0d4adac5c72       kube-controller-manager-ha-834040
	4b273e6fedf1a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      6 minutes ago       Running             kube-scheduler            0                   85d4eab358f29       kube-scheduler-ha-834040
	d2c6fc6f4ca02       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      6 minutes ago       Running             kube-apiserver            0                   1d3a02c48636b       kube-apiserver-ha-834040
	
	
	==> coredns [6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc] <==
	[INFO] 10.244.0.4:50316 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110698s
	[INFO] 10.244.1.2:34160 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001878377s
	[INFO] 10.244.1.2:53820 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100107s
	[INFO] 10.244.1.2:35233 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128279s
	[INFO] 10.244.1.2:40701 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000113335s
	[INFO] 10.244.1.2:51999 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000206194s
	[INFO] 10.244.2.2:36958 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164814s
	[INFO] 10.244.2.2:39443 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195028s
	[INFO] 10.244.2.2:39519 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001598365s
	[INFO] 10.244.2.2:57263 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000263661s
	[INFO] 10.244.0.4:58360 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097628s
	[INFO] 10.244.0.4:34351 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182428s
	[INFO] 10.244.1.2:54939 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000278877s
	[INFO] 10.244.1.2:37033 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000194177s
	[INFO] 10.244.1.2:37510 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190608s
	[INFO] 10.244.2.2:41536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108104s
	[INFO] 10.244.2.2:41561 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122082s
	[INFO] 10.244.0.4:42660 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221566s
	[INFO] 10.244.0.4:53159 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000188136s
	[INFO] 10.244.0.4:41046 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100215s
	[INFO] 10.244.0.4:50387 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176539s
	[INFO] 10.244.1.2:54773 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120996s
	[INFO] 10.244.1.2:51952 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119653s
	[INFO] 10.244.2.2:59116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134078s
	[INFO] 10.244.2.2:47917 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128001s
	
	
	==> coredns [7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450] <==
	[INFO] 10.244.0.4:51252 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.003843725s
	[INFO] 10.244.0.4:37817 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010513423s
	[INFO] 10.244.1.2:41192 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00179634s
	[INFO] 10.244.2.2:57444 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215144s
	[INFO] 10.244.2.2:56538 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00210828s
	[INFO] 10.244.0.4:58455 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118841s
	[INFO] 10.244.0.4:49345 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003481053s
	[INFO] 10.244.0.4:56716 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000187984s
	[INFO] 10.244.0.4:35412 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160258s
	[INFO] 10.244.1.2:56957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150599s
	[INFO] 10.244.1.2:53790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001450755s
	[INFO] 10.244.1.2:53927 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207107s
	[INFO] 10.244.2.2:55011 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001744357s
	[INFO] 10.244.2.2:59931 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000316475s
	[INFO] 10.244.2.2:52694 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184762s
	[INFO] 10.244.2.2:51472 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080603s
	[INFO] 10.244.0.4:33893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185444s
	[INFO] 10.244.0.4:54135 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072181s
	[INFO] 10.244.1.2:36921 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189721s
	[INFO] 10.244.2.2:60407 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015337s
	[INFO] 10.244.2.2:45057 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177157s
	[INFO] 10.244.1.2:52652 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273969s
	[INFO] 10.244.1.2:41042 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160192s
	[INFO] 10.244.2.2:55743 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233222s
	[INFO] 10.244.2.2:43090 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000228333s
	
	
	==> describe nodes <==
	Name:               ha-834040
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T20_23_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:23:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:30:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:27:01 +0000   Mon, 11 Mar 2024 20:23:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:27:01 +0000   Mon, 11 Mar 2024 20:23:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:27:01 +0000   Mon, 11 Mar 2024 20:23:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:27:01 +0000   Mon, 11 Mar 2024 20:23:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-834040
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6cb0aa00d5a4d388da50e20e0a9ccef
	  System UUID:                f6cb0aa0-0d5a-4d38-8da5-0e20e0a9ccef
	  Boot ID:                    47b6723c-3999-42a9-a19b-9f1c67aaecb8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-d62cw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 coredns-5dd5756b68-d6f2x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m39s
	  kube-system                 coredns-5dd5756b68-kq47h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m39s
	  kube-system                 etcd-ha-834040                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m52s
	  kube-system                 kindnet-bw656                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m39s
	  kube-system                 kube-apiserver-ha-834040             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 kube-controller-manager-ha-834040    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 kube-proxy-h8svv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-scheduler-ha-834040             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 kube-vip-ha-834040                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m37s  kube-proxy       
	  Normal  Starting                 6m52s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m52s  kubelet          Node ha-834040 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m52s  kubelet          Node ha-834040 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m52s  kubelet          Node ha-834040 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m52s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m40s  node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Normal  NodeReady                6m35s  kubelet          Node ha-834040 status is now: NodeReady
	  Normal  RegisteredNode           5m12s  node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Normal  RegisteredNode           3m58s  node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	
	
	Name:               ha-834040-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T20_24_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:24:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:27:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 11 Mar 2024 20:27:08 +0000   Mon, 11 Mar 2024 20:28:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 11 Mar 2024 20:27:08 +0000   Mon, 11 Mar 2024 20:28:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 11 Mar 2024 20:27:08 +0000   Mon, 11 Mar 2024 20:28:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 11 Mar 2024 20:27:08 +0000   Mon, 11 Mar 2024 20:28:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-834040-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d932b403e92c478480bfc9080f018c7a
	  System UUID:                d932b403-e92c-4784-80bf-c9080f018c7a
	  Boot ID:                    21b79699-e0c8-443f-8316-dd2d55446b7d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-h9jx5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-ha-834040-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m40s
	  kube-system                 kindnet-rqcq6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m41s
	  kube-system                 kube-apiserver-ha-834040-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-controller-manager-ha-834040-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-proxy-dsjx4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-scheduler-ha-834040-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-vip-ha-834040-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m21s  kube-proxy       
	  Normal  RegisteredNode  5m40s  node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  RegisteredNode  5m12s  node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  RegisteredNode  3m58s  node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  NodeNotReady    108s   node-controller  Node ha-834040-m02 status is now: NodeNotReady
	
	
	Name:               ha-834040-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T20_26_07_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:26:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:30:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:27:03 +0000   Mon, 11 Mar 2024 20:26:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:27:03 +0000   Mon, 11 Mar 2024 20:26:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:27:03 +0000   Mon, 11 Mar 2024 20:26:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:27:03 +0000   Mon, 11 Mar 2024 20:26:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    ha-834040-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6ff34b6936e4e2fada32a020c96ac8f
	  System UUID:                e6ff34b6-936e-4e2f-ada3-2a020c96ac8f
	  Boot ID:                    d1e0d295-4977-4e81-8d43-f63a102c1a44
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-mx5b4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-ha-834040-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m16s
	  kube-system                 kindnet-cf888                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m17s
	  kube-system                 kube-apiserver-ha-834040-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-ha-834040-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-4kkwc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-scheduler-ha-834040-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-vip-ha-834040-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m1s   kube-proxy       
	  Normal  RegisteredNode  4m17s  node-controller  Node ha-834040-m03 event: Registered Node ha-834040-m03 in Controller
	  Normal  RegisteredNode  4m15s  node-controller  Node ha-834040-m03 event: Registered Node ha-834040-m03 in Controller
	  Normal  RegisteredNode  3m58s  node-controller  Node ha-834040-m03 event: Registered Node ha-834040-m03 in Controller
	
	
	Name:               ha-834040-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T20_27_30_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:27:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:30:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:28:00 +0000   Mon, 11 Mar 2024 20:27:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:28:00 +0000   Mon, 11 Mar 2024 20:27:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:28:00 +0000   Mon, 11 Mar 2024 20:27:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:28:00 +0000   Mon, 11 Mar 2024 20:27:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-834040-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 01d975a4d97b45958b00e8cebd68bf34
	  System UUID:                01d975a4-d97b-4595-8b00-e8cebd68bf34
	  Boot ID:                    20c51306-7a45-415f-959d-65a8140505c6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gdbjb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m50s
	  kube-system                 kube-proxy-wc99r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m50s (x5 over 2m51s)  kubelet          Node ha-834040-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m50s (x5 over 2m51s)  kubelet          Node ha-834040-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m50s (x5 over 2m51s)  kubelet          Node ha-834040-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal  RegisteredNode           2m45s                  node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-834040-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar11 20:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051930] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043288] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.541344] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.468506] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[Mar11 20:23] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.744921] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.061444] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067061] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.157638] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.161215] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.262542] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +5.181266] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.062600] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.584713] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.482512] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.376234] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.096131] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.894025] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.119032] kauditd_printk_skb: 58 callbacks suppressed
	[Mar11 20:24] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1] <==
	{"level":"warn","ts":"2024-03-11T20:30:19.199581Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.21344Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.220525Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.227544Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.231706Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.23488Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.24562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.25191Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.253791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.254649Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.257936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.264394Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.268548Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.27704Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.28794Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.293466Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.296823Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.301524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.30818Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.314192Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.316892Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.327519Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"49bf4fb7f029b9bd","rtt":"10.73367ms","error":"dial tcp 192.168.39.101:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-11T20:30:19.32763Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"49bf4fb7f029b9bd","rtt":"2.493758ms","error":"dial tcp 192.168.39.101:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-11T20:30:19.328248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:30:19.355236Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:30:19 up 7 min,  0 users,  load average: 0.47, 0.45, 0.24
	Linux ha-834040 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bde13375794363aa708c796adf81c991290316a9abb1584bd0d1a6b7fcbc1239] <==
	I0311 20:29:44.088633       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	I0311 20:29:54.099514       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0311 20:29:54.099566       1 main.go:227] handling current node
	I0311 20:29:54.099579       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0311 20:29:54.099585       1 main.go:250] Node ha-834040-m02 has CIDR [10.244.1.0/24] 
	I0311 20:29:54.099718       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0311 20:29:54.099727       1 main.go:250] Node ha-834040-m03 has CIDR [10.244.2.0/24] 
	I0311 20:29:54.099787       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0311 20:29:54.099820       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	I0311 20:30:04.105670       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0311 20:30:04.105714       1 main.go:227] handling current node
	I0311 20:30:04.105736       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0311 20:30:04.105741       1 main.go:250] Node ha-834040-m02 has CIDR [10.244.1.0/24] 
	I0311 20:30:04.105878       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0311 20:30:04.105912       1 main.go:250] Node ha-834040-m03 has CIDR [10.244.2.0/24] 
	I0311 20:30:04.105969       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0311 20:30:04.106001       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	I0311 20:30:14.114578       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0311 20:30:14.114926       1 main.go:227] handling current node
	I0311 20:30:14.114987       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0311 20:30:14.115018       1 main.go:250] Node ha-834040-m02 has CIDR [10.244.1.0/24] 
	I0311 20:30:14.115254       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0311 20:30:14.115302       1 main.go:250] Node ha-834040-m03 has CIDR [10.244.2.0/24] 
	I0311 20:30:14.115403       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0311 20:30:14.115433       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d2c6fc6f4ca02e29aec794ea48b682294a80ffbea548013775fff8dfd449a944] <==
	Trace[1545161709]: [4.626139808s] [4.626139808s] END
	I0311 20:24:53.765146       1 trace.go:236] Trace[401973741]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:814b6913-b89b-4423-b345-d52032cab5fb,client:192.168.39.101,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (11-Mar-2024 20:24:46.625) (total time: 7139ms):
	Trace[401973741]: [7.139891131s] [7.139891131s] END
	I0311 20:24:53.766648       1 trace.go:236] Trace[2066086163]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7d46111c-f595-403d-8bcc-203f0f24e52c,client:192.168.39.101,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (11-Mar-2024 20:24:47.542) (total time: 6223ms):
	Trace[2066086163]: [6.223644822s] [6.223644822s] END
	I0311 20:24:53.767465       1 trace.go:236] Trace[186188628]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ef41ab4c-daf8-4540-9d88-ee64ffbbd3c5,client:192.168.39.101,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (11-Mar-2024 20:24:47.541) (total time: 6225ms):
	Trace[186188628]: [6.22596821s] [6.22596821s] END
	I0311 20:24:53.767772       1 trace.go:236] Trace[448873650]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:709c42c9-1ef3-4c8c-89b3-f722acb945d1,client:192.168.39.101,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (11-Mar-2024 20:24:48.981) (total time: 4786ms):
	Trace[448873650]: [4.786497636s] [4.786497636s] END
	I0311 20:27:30.140778       1 trace.go:236] Trace[947205264]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:926c942a-690c-476a-811e-59e2651730cc,client:192.168.39.44,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (11-Mar-2024 20:27:29.602) (total time: 538ms):
	Trace[947205264]: ["Create etcd3" audit-id:926c942a-690c-476a-811e-59e2651730cc,key:/events/default/ha-834040-m04.17bbcfb2408ab3c3,type:*core.Event,resource:events 537ms (20:27:29.602)
	Trace[947205264]:  ---"TransformToStorage succeeded" 230ms (20:27:29.833)
	Trace[947205264]:  ---"Txn call succeeded" 307ms (20:27:30.140)]
	Trace[947205264]: [538.03371ms] [538.03371ms] END
	I0311 20:27:30.142812       1 trace.go:236] Trace[928836211]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:3a61f083-81f6-4428-8739-11361a1aa52b,client:192.168.39.128,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:daemon-set-controller,verb:POST (11-Mar-2024 20:27:29.622) (total time: 519ms):
	Trace[928836211]: [519.864976ms] [519.864976ms] END
	I0311 20:27:30.143832       1 trace.go:236] Trace[1655549944]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:f198e904-57e8-4ad7-a738-8d0e832e0ba8,client:192.168.39.128,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:daemon-set-controller,verb:POST (11-Mar-2024 20:27:29.626) (total time: 516ms):
	Trace[1655549944]: [516.9239ms] [516.9239ms] END
	I0311 20:27:30.154288       1 trace.go:236] Trace[483810055]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:51e0cce3-1513-4212-962f-c083ba484c2c,client:192.168.39.128,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-834040-m04,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:ttl-controller,verb:PATCH (11-Mar-2024 20:27:29.624) (total time: 529ms):
	Trace[483810055]: ["GuaranteedUpdate etcd3" audit-id:51e0cce3-1513-4212-962f-c083ba484c2c,key:/minions/ha-834040-m04,type:*core.Node,resource:nodes 529ms (20:27:29.624)
	Trace[483810055]:  ---"Txn call completed" 204ms (20:27:29.833)
	Trace[483810055]:  ---"Txn call completed" 319ms (20:27:30.153)]
	Trace[483810055]: ---"About to apply patch" 205ms (20:27:29.833)
	Trace[483810055]: ---"Object stored in database" 319ms (20:27:30.154)
	Trace[483810055]: [529.345251ms] [529.345251ms] END
	
	
	==> kube-controller-manager [abfa6c7eaf9de4ab3088d26a5835e9b00f125cd279c3b56757edcb48e368cbf8] <==
	I0311 20:26:49.318132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="29.194831ms"
	I0311 20:26:49.319163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="101.457µs"
	I0311 20:26:51.039886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.283516ms"
	I0311 20:26:51.040448       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.855µs"
	I0311 20:26:51.378805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="90.609µs"
	I0311 20:26:51.614801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="44.290793ms"
	I0311 20:26:51.614915       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.225µs"
	I0311 20:26:51.659412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.06168ms"
	I0311 20:26:51.659779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="83.897µs"
	E0311 20:27:27.977493       1 certificate_controller.go:146] Sync csr-pnzbp failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-pnzbp": the object has been modified; please apply your changes to the latest version and try again
	E0311 20:27:27.982824       1 certificate_controller.go:146] Sync csr-pnzbp failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-pnzbp": the object has been modified; please apply your changes to the latest version and try again
	I0311 20:27:29.595787       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-834040-m04\" does not exist"
	I0311 20:27:29.840525       1 range_allocator.go:380] "Set node PodCIDR" node="ha-834040-m04" podCIDRs=["10.244.3.0/24"]
	I0311 20:27:30.148695       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wc99r"
	I0311 20:27:30.148756       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gdbjb"
	I0311 20:27:30.277285       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-jckf6"
	I0311 20:27:30.279557       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-lhqdl"
	I0311 20:27:30.420455       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-btkbp"
	I0311 20:27:30.432844       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-9ksnv"
	I0311 20:27:34.246405       1 event.go:307] "Event occurred" object="ha-834040-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller"
	I0311 20:27:34.266245       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-834040-m04"
	I0311 20:27:37.380506       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-834040-m04"
	I0311 20:28:31.258348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-834040-m04"
	I0311 20:28:31.440290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.30239ms"
	I0311 20:28:31.441311       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="141.98µs"
	
	
	==> kube-proxy [ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25] <==
	I0311 20:23:41.879943       1 server_others.go:69] "Using iptables proxy"
	I0311 20:23:41.908431       1 node.go:141] Successfully retrieved node IP: 192.168.39.128
	I0311 20:23:42.020698       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 20:23:42.020756       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 20:23:42.036364       1 server_others.go:152] "Using iptables Proxier"
	I0311 20:23:42.036526       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 20:23:42.037206       1 server.go:846] "Version info" version="v1.28.4"
	I0311 20:23:42.037316       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:23:42.042327       1 config.go:315] "Starting node config controller"
	I0311 20:23:42.042430       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 20:23:42.048456       1 config.go:188] "Starting service config controller"
	I0311 20:23:42.048469       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 20:23:42.048491       1 config.go:97] "Starting endpoint slice config controller"
	I0311 20:23:42.048502       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 20:23:42.143225       1 shared_informer.go:318] Caches are synced for node config
	I0311 20:23:42.148691       1 shared_informer.go:318] Caches are synced for service config
	I0311 20:23:42.148672       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861] <==
	W0311 20:23:24.248261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 20:23:24.248399       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0311 20:23:24.253937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 20:23:24.253997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 20:23:25.214421       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 20:23:25.214471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0311 20:23:25.245746       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 20:23:25.245830       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 20:23:25.310965       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 20:23:25.311141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 20:23:25.339716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 20:23:25.339771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0311 20:23:25.418715       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 20:23:25.418795       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 20:23:25.483360       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 20:23:25.484056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 20:23:25.664472       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 20:23:25.664528       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0311 20:23:28.126417       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0311 20:26:48.785891       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-h9jx5\": pod busybox-5b5d89c9d6-h9jx5 is already assigned to node \"ha-834040-m02\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-h9jx5" node="ha-834040-m02"
	E0311 20:26:48.793180       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 34e02cf4-79e4-4bbc-ae43-c0f5ef80637a(default/busybox-5b5d89c9d6-h9jx5) wasn't assumed so cannot be forgotten"
	E0311 20:26:48.793621       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-h9jx5\": pod busybox-5b5d89c9d6-h9jx5 is already assigned to node \"ha-834040-m02\"" pod="default/busybox-5b5d89c9d6-h9jx5"
	I0311 20:26:48.793838       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-h9jx5" node="ha-834040-m02"
	E0311 20:27:30.190479       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wc99r\": pod kube-proxy-wc99r is already assigned to node \"ha-834040-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wc99r" node="ha-834040-m04"
	E0311 20:27:30.195971       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wc99r\": pod kube-proxy-wc99r is already assigned to node \"ha-834040-m04\"" pod="kube-system/kube-proxy-wc99r"
	
	
	==> kubelet <==
	Mar 11 20:25:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 20:26:27 ha-834040 kubelet[1373]: E0311 20:26:27.613642    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:26:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:26:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:26:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:26:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 20:26:48 ha-834040 kubelet[1373]: I0311 20:26:48.831597    1373 topology_manager.go:215] "Topology Admit Handler" podUID="ea39821f-426d-43bf-a955-77e3a308239e" podNamespace="default" podName="busybox-5b5d89c9d6-d62cw"
	Mar 11 20:26:48 ha-834040 kubelet[1373]: W0311 20:26:48.840824    1373 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-834040" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-834040' and this object
	Mar 11 20:26:48 ha-834040 kubelet[1373]: E0311 20:26:48.840924    1373 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-834040" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-834040' and this object
	Mar 11 20:26:48 ha-834040 kubelet[1373]: I0311 20:26:48.940186    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv5r8\" (UniqueName: \"kubernetes.io/projected/ea39821f-426d-43bf-a955-77e3a308239e-kube-api-access-xv5r8\") pod \"busybox-5b5d89c9d6-d62cw\" (UID: \"ea39821f-426d-43bf-a955-77e3a308239e\") " pod="default/busybox-5b5d89c9d6-d62cw"
	Mar 11 20:27:27 ha-834040 kubelet[1373]: E0311 20:27:27.614646    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:27:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:27:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:27:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:27:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 20:28:27 ha-834040 kubelet[1373]: E0311 20:28:27.614520    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:28:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:28:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:28:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:28:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 20:29:27 ha-834040 kubelet[1373]: E0311 20:29:27.615811    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:29:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:29:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:29:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:29:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-834040 -n ha-834040
helpers_test.go:261: (dbg) Run:  kubectl --context ha-834040 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/StopSecondaryNode (142.14s)

                                                
                                    
TestMutliControlPlane/serial/RestartSecondaryNode (61.08s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr: exit status 3 (3.194092224s)

                                                
                                                
-- stdout --
	ha-834040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-834040-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:30:24.042615   31893 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:30:24.042758   31893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:30:24.042775   31893 out.go:304] Setting ErrFile to fd 2...
	I0311 20:30:24.042781   31893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:30:24.043061   31893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:30:24.043304   31893 out.go:298] Setting JSON to false
	I0311 20:30:24.043338   31893 mustload.go:65] Loading cluster: ha-834040
	I0311 20:30:24.043466   31893 notify.go:220] Checking for updates...
	I0311 20:30:24.043863   31893 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:30:24.043883   31893 status.go:255] checking status of ha-834040 ...
	I0311 20:30:24.044266   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:24.044297   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:24.060697   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36121
	I0311 20:30:24.061113   31893 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:24.061663   31893 main.go:141] libmachine: Using API Version  1
	I0311 20:30:24.061678   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:24.062037   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:24.062254   31893 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:30:24.063831   31893 status.go:330] ha-834040 host status = "Running" (err=<nil>)
	I0311 20:30:24.063855   31893 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:30:24.064122   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:24.064156   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:24.078860   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0311 20:30:24.079164   31893 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:24.079576   31893 main.go:141] libmachine: Using API Version  1
	I0311 20:30:24.079592   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:24.079860   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:24.080048   31893 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:30:24.082378   31893 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:24.082793   31893 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:30:24.082819   31893 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:24.082925   31893 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:30:24.083196   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:24.083226   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:24.097424   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34391
	I0311 20:30:24.097739   31893 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:24.098145   31893 main.go:141] libmachine: Using API Version  1
	I0311 20:30:24.098165   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:24.098443   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:24.098599   31893 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:30:24.098775   31893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:24.098800   31893 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:30:24.101254   31893 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:24.101644   31893 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:30:24.101660   31893 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:24.101786   31893 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:30:24.101964   31893 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:30:24.102106   31893 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:30:24.102238   31893 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:30:24.181134   31893 ssh_runner.go:195] Run: systemctl --version
	I0311 20:30:24.187504   31893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:24.204196   31893 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:30:24.204219   31893 api_server.go:166] Checking apiserver status ...
	I0311 20:30:24.204253   31893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:30:24.219418   31893 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup
	W0311 20:30:24.229366   31893 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:30:24.229408   31893 ssh_runner.go:195] Run: ls
	I0311 20:30:24.234066   31893 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:30:24.238478   31893 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:30:24.238496   31893 status.go:422] ha-834040 apiserver status = Running (err=<nil>)
	I0311 20:30:24.238504   31893 status.go:257] ha-834040 status: &{Name:ha-834040 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:30:24.238518   31893 status.go:255] checking status of ha-834040-m02 ...
	I0311 20:30:24.238794   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:24.238827   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:24.253228   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
	I0311 20:30:24.253595   31893 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:24.254037   31893 main.go:141] libmachine: Using API Version  1
	I0311 20:30:24.254062   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:24.254354   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:24.254519   31893 main.go:141] libmachine: (ha-834040-m02) Calling .GetState
	I0311 20:30:24.256112   31893 status.go:330] ha-834040-m02 host status = "Running" (err=<nil>)
	I0311 20:30:24.256128   31893 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:30:24.256396   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:24.256427   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:24.271087   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38301
	I0311 20:30:24.271444   31893 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:24.271908   31893 main.go:141] libmachine: Using API Version  1
	I0311 20:30:24.271927   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:24.272236   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:24.272430   31893 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:30:24.275014   31893 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:24.275467   31893 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:30:24.275498   31893 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:24.275668   31893 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:30:24.276053   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:24.276095   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:24.291092   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36265
	I0311 20:30:24.291501   31893 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:24.291961   31893 main.go:141] libmachine: Using API Version  1
	I0311 20:30:24.291980   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:24.292277   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:24.292470   31893 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:30:24.292629   31893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:24.292646   31893 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:30:24.295542   31893 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:24.295958   31893 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:30:24.295984   31893 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:24.296096   31893 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:30:24.296257   31893 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:30:24.296399   31893 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:30:24.296525   31893 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	W0311 20:30:26.824967   31893 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0311 20:30:26.825079   31893 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0311 20:30:26.825102   31893 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:26.825116   31893 status.go:257] ha-834040-m02 status: &{Name:ha-834040-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 20:30:26.825138   31893 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:26.825153   31893 status.go:255] checking status of ha-834040-m03 ...
	I0311 20:30:26.825458   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:26.825504   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:26.840878   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0311 20:30:26.841221   31893 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:26.841732   31893 main.go:141] libmachine: Using API Version  1
	I0311 20:30:26.841752   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:26.842097   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:26.842272   31893 main.go:141] libmachine: (ha-834040-m03) Calling .GetState
	I0311 20:30:26.843774   31893 status.go:330] ha-834040-m03 host status = "Running" (err=<nil>)
	I0311 20:30:26.843791   31893 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:30:26.844476   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:26.844535   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:26.858886   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40271
	I0311 20:30:26.859207   31893 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:26.859621   31893 main.go:141] libmachine: Using API Version  1
	I0311 20:30:26.859644   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:26.859931   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:26.860105   31893 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:30:26.862655   31893 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:26.863091   31893 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:30:26.863115   31893 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:26.863272   31893 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:30:26.863568   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:26.863605   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:26.877674   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41225
	I0311 20:30:26.877995   31893 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:26.878414   31893 main.go:141] libmachine: Using API Version  1
	I0311 20:30:26.878426   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:26.878717   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:26.878926   31893 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:30:26.879107   31893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:26.879126   31893 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:30:26.881671   31893 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:26.882074   31893 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:30:26.882104   31893 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:26.882224   31893 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:30:26.882368   31893 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:30:26.882530   31893 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:30:26.882664   31893 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:30:26.968847   31893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:26.986134   31893 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:30:26.986155   31893 api_server.go:166] Checking apiserver status ...
	I0311 20:30:26.986184   31893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:30:27.001404   31893 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0311 20:30:27.012559   31893 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:30:27.012612   31893 ssh_runner.go:195] Run: ls
	I0311 20:30:27.017918   31893 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:30:27.022793   31893 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:30:27.022817   31893 status.go:422] ha-834040-m03 apiserver status = Running (err=<nil>)
	I0311 20:30:27.022828   31893 status.go:257] ha-834040-m03 status: &{Name:ha-834040-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:30:27.022841   31893 status.go:255] checking status of ha-834040-m04 ...
	I0311 20:30:27.023091   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:27.023126   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:27.037469   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0311 20:30:27.037886   31893 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:27.038323   31893 main.go:141] libmachine: Using API Version  1
	I0311 20:30:27.038344   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:27.038685   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:27.038848   31893 main.go:141] libmachine: (ha-834040-m04) Calling .GetState
	I0311 20:30:27.040342   31893 status.go:330] ha-834040-m04 host status = "Running" (err=<nil>)
	I0311 20:30:27.040365   31893 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:30:27.040644   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:27.040686   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:27.056028   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0311 20:30:27.056396   31893 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:27.056843   31893 main.go:141] libmachine: Using API Version  1
	I0311 20:30:27.056869   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:27.057129   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:27.057290   31893 main.go:141] libmachine: (ha-834040-m04) Calling .GetIP
	I0311 20:30:27.059677   31893 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:27.060076   31893 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:30:27.060103   31893 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:27.060205   31893 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:30:27.060464   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:27.060494   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:27.074833   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32825
	I0311 20:30:27.075196   31893 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:27.075625   31893 main.go:141] libmachine: Using API Version  1
	I0311 20:30:27.075644   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:27.075930   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:27.076145   31893 main.go:141] libmachine: (ha-834040-m04) Calling .DriverName
	I0311 20:30:27.076338   31893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:27.076362   31893 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHHostname
	I0311 20:30:27.078830   31893 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:27.079222   31893 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:30:27.079249   31893 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:27.079413   31893 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHPort
	I0311 20:30:27.079566   31893 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHKeyPath
	I0311 20:30:27.079693   31893 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHUsername
	I0311 20:30:27.079825   31893 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m04/id_rsa Username:docker}
	I0311 20:30:27.164912   31893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:27.182254   31893 status.go:257] ha-834040-m04 status: &{Name:ha-834040-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr: exit status 3 (5.144791694s)

                                                
                                                
-- stdout --
	ha-834040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-834040-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:30:28.238438   31977 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:30:28.238699   31977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:30:28.238709   31977 out.go:304] Setting ErrFile to fd 2...
	I0311 20:30:28.238715   31977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:30:28.238908   31977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:30:28.239087   31977 out.go:298] Setting JSON to false
	I0311 20:30:28.239117   31977 mustload.go:65] Loading cluster: ha-834040
	I0311 20:30:28.239228   31977 notify.go:220] Checking for updates...
	I0311 20:30:28.239566   31977 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:30:28.239582   31977 status.go:255] checking status of ha-834040 ...
	I0311 20:30:28.240090   31977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:28.240141   31977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:28.257252   31977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43421
	I0311 20:30:28.257698   31977 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:28.258356   31977 main.go:141] libmachine: Using API Version  1
	I0311 20:30:28.258395   31977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:28.258692   31977 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:28.258864   31977 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:30:28.260373   31977 status.go:330] ha-834040 host status = "Running" (err=<nil>)
	I0311 20:30:28.260398   31977 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:30:28.260635   31977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:28.260668   31977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:28.276916   31977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38677
	I0311 20:30:28.277303   31977 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:28.277752   31977 main.go:141] libmachine: Using API Version  1
	I0311 20:30:28.277774   31977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:28.278622   31977 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:28.278810   31977 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:30:28.281910   31977 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:28.282609   31977 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:30:28.282639   31977 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:28.282778   31977 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:30:28.283044   31977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:28.283082   31977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:28.296988   31977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38077
	I0311 20:30:28.297436   31977 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:28.297910   31977 main.go:141] libmachine: Using API Version  1
	I0311 20:30:28.297946   31977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:28.298205   31977 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:28.298364   31977 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:30:28.298545   31977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:28.298562   31977 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:30:28.300920   31977 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:28.301297   31977 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:30:28.301352   31977 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:28.301461   31977 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:30:28.301614   31977 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:30:28.301770   31977 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:30:28.301908   31977 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:30:28.382385   31977 ssh_runner.go:195] Run: systemctl --version
	I0311 20:30:28.391848   31977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:28.407965   31977 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:30:28.407990   31977 api_server.go:166] Checking apiserver status ...
	I0311 20:30:28.408032   31977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:30:28.425047   31977 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup
	W0311 20:30:28.437414   31977 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:30:28.437466   31977 ssh_runner.go:195] Run: ls
	I0311 20:30:28.442556   31977 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:30:28.454275   31977 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:30:28.454298   31977 status.go:422] ha-834040 apiserver status = Running (err=<nil>)
	I0311 20:30:28.454307   31977 status.go:257] ha-834040 status: &{Name:ha-834040 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:30:28.454323   31977 status.go:255] checking status of ha-834040-m02 ...
	I0311 20:30:28.454590   31977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:28.454621   31977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:28.470541   31977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I0311 20:30:28.470907   31977 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:28.471355   31977 main.go:141] libmachine: Using API Version  1
	I0311 20:30:28.471377   31977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:28.471665   31977 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:28.471839   31977 main.go:141] libmachine: (ha-834040-m02) Calling .GetState
	I0311 20:30:28.473114   31977 status.go:330] ha-834040-m02 host status = "Running" (err=<nil>)
	I0311 20:30:28.473131   31977 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:30:28.473424   31977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:28.473460   31977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:28.489143   31977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44819
	I0311 20:30:28.489482   31977 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:28.489896   31977 main.go:141] libmachine: Using API Version  1
	I0311 20:30:28.489919   31977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:28.490202   31977 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:28.490370   31977 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:30:28.493122   31977 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:28.493510   31977 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:30:28.493535   31977 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:28.493659   31977 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:30:28.493935   31977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:28.493967   31977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:28.507195   31977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0311 20:30:28.507524   31977 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:28.507921   31977 main.go:141] libmachine: Using API Version  1
	I0311 20:30:28.507943   31977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:28.508252   31977 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:28.508423   31977 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:30:28.508575   31977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:28.508590   31977 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:30:28.511040   31977 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:28.511431   31977 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:30:28.511465   31977 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:28.511598   31977 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:30:28.511769   31977 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:30:28.511915   31977 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:30:28.512056   31977 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	W0311 20:30:29.897113   31977 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:29.897190   31977 retry.go:31] will retry after 290.682092ms: dial tcp 192.168.39.101:22: connect: no route to host
	W0311 20:30:32.969031   31977 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0311 20:30:32.969115   31977 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0311 20:30:32.969144   31977 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:32.969152   31977 status.go:257] ha-834040-m02 status: &{Name:ha-834040-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 20:30:32.969168   31977 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:32.969176   31977 status.go:255] checking status of ha-834040-m03 ...
	I0311 20:30:32.969538   31977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:32.969583   31977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:32.984170   31977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45061
	I0311 20:30:32.984630   31977 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:32.985099   31977 main.go:141] libmachine: Using API Version  1
	I0311 20:30:32.985120   31977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:32.985437   31977 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:32.985608   31977 main.go:141] libmachine: (ha-834040-m03) Calling .GetState
	I0311 20:30:32.987049   31977 status.go:330] ha-834040-m03 host status = "Running" (err=<nil>)
	I0311 20:30:32.987065   31977 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:30:32.987783   31977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:32.987844   31977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:33.002512   31977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
	I0311 20:30:33.002878   31977 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:33.003328   31977 main.go:141] libmachine: Using API Version  1
	I0311 20:30:33.003353   31977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:33.003665   31977 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:33.003855   31977 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:30:33.006486   31977 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:33.006923   31977 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:30:33.006949   31977 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:33.007067   31977 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:30:33.007441   31977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:33.007480   31977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:33.021505   31977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0311 20:30:33.021893   31977 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:33.022328   31977 main.go:141] libmachine: Using API Version  1
	I0311 20:30:33.022346   31977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:33.022632   31977 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:33.022806   31977 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:30:33.022991   31977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:33.023010   31977 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:30:33.025493   31977 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:33.025934   31977 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:30:33.025960   31977 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:33.026073   31977 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:30:33.026230   31977 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:30:33.026373   31977 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:30:33.026496   31977 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:30:33.118415   31977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:33.133449   31977 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:30:33.133473   31977 api_server.go:166] Checking apiserver status ...
	I0311 20:30:33.133504   31977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:30:33.147667   31977 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0311 20:30:33.160629   31977 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:30:33.160672   31977 ssh_runner.go:195] Run: ls
	I0311 20:30:33.166127   31977 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:30:33.170571   31977 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:30:33.170591   31977 status.go:422] ha-834040-m03 apiserver status = Running (err=<nil>)
	I0311 20:30:33.170600   31977 status.go:257] ha-834040-m03 status: &{Name:ha-834040-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:30:33.170613   31977 status.go:255] checking status of ha-834040-m04 ...
	I0311 20:30:33.170948   31977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:33.170983   31977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:33.185459   31977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41955
	I0311 20:30:33.185905   31977 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:33.186439   31977 main.go:141] libmachine: Using API Version  1
	I0311 20:30:33.186462   31977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:33.186789   31977 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:33.186973   31977 main.go:141] libmachine: (ha-834040-m04) Calling .GetState
	I0311 20:30:33.188558   31977 status.go:330] ha-834040-m04 host status = "Running" (err=<nil>)
	I0311 20:30:33.188576   31977 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:30:33.188900   31977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:33.188932   31977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:33.203542   31977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38433
	I0311 20:30:33.203892   31977 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:33.204327   31977 main.go:141] libmachine: Using API Version  1
	I0311 20:30:33.204360   31977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:33.204669   31977 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:33.204894   31977 main.go:141] libmachine: (ha-834040-m04) Calling .GetIP
	I0311 20:30:33.207323   31977 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:33.207700   31977 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:30:33.207721   31977 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:33.207887   31977 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:30:33.208150   31977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:33.208178   31977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:33.223026   31977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I0311 20:30:33.223414   31977 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:33.223891   31977 main.go:141] libmachine: Using API Version  1
	I0311 20:30:33.223916   31977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:33.224241   31977 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:33.224417   31977 main.go:141] libmachine: (ha-834040-m04) Calling .DriverName
	I0311 20:30:33.224601   31977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:33.224623   31977 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHHostname
	I0311 20:30:33.227120   31977 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:33.227556   31977 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:30:33.227579   31977 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:33.227723   31977 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHPort
	I0311 20:30:33.227870   31977 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHKeyPath
	I0311 20:30:33.227991   31977 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHUsername
	I0311 20:30:33.228129   31977 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m04/id_rsa Username:docker}
	I0311 20:30:33.312789   31977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:33.329538   31977 status.go:257] ha-834040-m04 status: &{Name:ha-834040-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
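The trace above shows the per-node probe sequence the status command walks through: open an SSH session to the node, check /var usage, check whether the kubelet service is active, then hit the apiserver health endpoint on the load-balancer address (https://192.168.39.254:8443/healthz, which returns 200 "ok" here). As a rough illustration only, the following standalone Go sketch reproduces just the healthz probe; the function name checkHealthz and the use of InsecureSkipVerify are assumptions of this sketch (it has no access to the cluster CA), not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues a short-timeout GET against an apiserver /healthz URL
// and treats HTTP 200 as healthy, mirroring the check visible in the log.
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification is an assumption of this sketch only;
			// the real client authenticates against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", endpoint, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver unhealthy:", err)
	}
}
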
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr: exit status 3 (4.397029603s)

                                                
                                                
-- stdout --
	ha-834040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-834040-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:30:35.292659   32085 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:30:35.292818   32085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:30:35.292831   32085 out.go:304] Setting ErrFile to fd 2...
	I0311 20:30:35.292838   32085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:30:35.293106   32085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:30:35.293353   32085 out.go:298] Setting JSON to false
	I0311 20:30:35.293390   32085 mustload.go:65] Loading cluster: ha-834040
	I0311 20:30:35.293512   32085 notify.go:220] Checking for updates...
	I0311 20:30:35.293781   32085 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:30:35.293802   32085 status.go:255] checking status of ha-834040 ...
	I0311 20:30:35.294212   32085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:35.294285   32085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:35.309464   32085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I0311 20:30:35.309827   32085 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:35.310397   32085 main.go:141] libmachine: Using API Version  1
	I0311 20:30:35.310420   32085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:35.310730   32085 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:35.310932   32085 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:30:35.312401   32085 status.go:330] ha-834040 host status = "Running" (err=<nil>)
	I0311 20:30:35.312424   32085 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:30:35.312668   32085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:35.312697   32085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:35.327273   32085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I0311 20:30:35.327638   32085 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:35.328039   32085 main.go:141] libmachine: Using API Version  1
	I0311 20:30:35.328061   32085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:35.328388   32085 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:35.328580   32085 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:30:35.331209   32085 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:35.331625   32085 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:30:35.331660   32085 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:35.331817   32085 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:30:35.332082   32085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:35.332125   32085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:35.346498   32085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I0311 20:30:35.346908   32085 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:35.347318   32085 main.go:141] libmachine: Using API Version  1
	I0311 20:30:35.347341   32085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:35.347622   32085 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:35.347805   32085 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:30:35.347992   32085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:35.348025   32085 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:30:35.350442   32085 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:35.350844   32085 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:30:35.350857   32085 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:35.351227   32085 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:30:35.351458   32085 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:30:35.351604   32085 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:30:35.351713   32085 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:30:35.437601   32085 ssh_runner.go:195] Run: systemctl --version
	I0311 20:30:35.444392   32085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:35.461034   32085 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:30:35.461057   32085 api_server.go:166] Checking apiserver status ...
	I0311 20:30:35.461091   32085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:30:35.482813   32085 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup
	W0311 20:30:35.493792   32085 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:30:35.493849   32085 ssh_runner.go:195] Run: ls
	I0311 20:30:35.499496   32085 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:30:35.506617   32085 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:30:35.506635   32085 status.go:422] ha-834040 apiserver status = Running (err=<nil>)
	I0311 20:30:35.506644   32085 status.go:257] ha-834040 status: &{Name:ha-834040 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:30:35.506658   32085 status.go:255] checking status of ha-834040-m02 ...
	I0311 20:30:35.506955   32085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:35.506988   32085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:35.522945   32085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I0311 20:30:35.523361   32085 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:35.523814   32085 main.go:141] libmachine: Using API Version  1
	I0311 20:30:35.523850   32085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:35.524138   32085 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:35.524322   32085 main.go:141] libmachine: (ha-834040-m02) Calling .GetState
	I0311 20:30:35.525844   32085 status.go:330] ha-834040-m02 host status = "Running" (err=<nil>)
	I0311 20:30:35.525858   32085 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:30:35.526158   32085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:35.526195   32085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:35.540783   32085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0311 20:30:35.541154   32085 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:35.541614   32085 main.go:141] libmachine: Using API Version  1
	I0311 20:30:35.541638   32085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:35.541948   32085 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:35.542150   32085 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:30:35.544701   32085 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:35.545168   32085 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:30:35.545192   32085 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:35.545331   32085 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:30:35.545616   32085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:35.545654   32085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:35.560256   32085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0311 20:30:35.560603   32085 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:35.561069   32085 main.go:141] libmachine: Using API Version  1
	I0311 20:30:35.561088   32085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:35.561352   32085 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:35.561543   32085 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:30:35.561722   32085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:35.561745   32085 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:30:35.564464   32085 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:35.564890   32085 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:30:35.564918   32085 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:35.565065   32085 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:30:35.565199   32085 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:30:35.565338   32085 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:30:35.565490   32085 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	W0311 20:30:36.040967   32085 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:36.041006   32085 retry.go:31] will retry after 158.323572ms: dial tcp 192.168.39.101:22: connect: no route to host
	W0311 20:30:39.273022   32085 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0311 20:30:39.273137   32085 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0311 20:30:39.273164   32085 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:39.273175   32085 status.go:257] ha-834040-m02 status: &{Name:ha-834040-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 20:30:39.273197   32085 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:39.273204   32085 status.go:255] checking status of ha-834040-m03 ...
	I0311 20:30:39.273582   32085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:39.273625   32085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:39.288472   32085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45907
	I0311 20:30:39.288942   32085 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:39.289406   32085 main.go:141] libmachine: Using API Version  1
	I0311 20:30:39.289430   32085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:39.289739   32085 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:39.289935   32085 main.go:141] libmachine: (ha-834040-m03) Calling .GetState
	I0311 20:30:39.291334   32085 status.go:330] ha-834040-m03 host status = "Running" (err=<nil>)
	I0311 20:30:39.291352   32085 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:30:39.291676   32085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:39.291729   32085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:39.305451   32085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45691
	I0311 20:30:39.305763   32085 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:39.306225   32085 main.go:141] libmachine: Using API Version  1
	I0311 20:30:39.306248   32085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:39.306557   32085 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:39.306731   32085 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:30:39.309641   32085 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:39.310046   32085 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:30:39.310074   32085 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:39.310196   32085 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:30:39.310467   32085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:39.310497   32085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:39.324467   32085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43179
	I0311 20:30:39.324896   32085 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:39.325440   32085 main.go:141] libmachine: Using API Version  1
	I0311 20:30:39.325459   32085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:39.325735   32085 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:39.325906   32085 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:30:39.326057   32085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:39.326081   32085 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:30:39.328565   32085 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:39.328982   32085 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:30:39.329008   32085 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:39.329108   32085 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:30:39.329261   32085 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:30:39.329419   32085 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:30:39.329535   32085 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:30:39.417992   32085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:39.440688   32085 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:30:39.440715   32085 api_server.go:166] Checking apiserver status ...
	I0311 20:30:39.440771   32085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:30:39.454857   32085 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0311 20:30:39.464391   32085 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:30:39.464445   32085 ssh_runner.go:195] Run: ls
	I0311 20:30:39.469770   32085 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:30:39.474324   32085 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:30:39.474343   32085 status.go:422] ha-834040-m03 apiserver status = Running (err=<nil>)
	I0311 20:30:39.474353   32085 status.go:257] ha-834040-m03 status: &{Name:ha-834040-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:30:39.474371   32085 status.go:255] checking status of ha-834040-m04 ...
	I0311 20:30:39.474639   32085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:39.474677   32085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:39.489162   32085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43951
	I0311 20:30:39.489667   32085 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:39.490202   32085 main.go:141] libmachine: Using API Version  1
	I0311 20:30:39.490219   32085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:39.490558   32085 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:39.490778   32085 main.go:141] libmachine: (ha-834040-m04) Calling .GetState
	I0311 20:30:39.492335   32085 status.go:330] ha-834040-m04 host status = "Running" (err=<nil>)
	I0311 20:30:39.492351   32085 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:30:39.492679   32085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:39.492724   32085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:39.508911   32085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I0311 20:30:39.509424   32085 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:39.509913   32085 main.go:141] libmachine: Using API Version  1
	I0311 20:30:39.509934   32085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:39.510271   32085 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:39.510488   32085 main.go:141] libmachine: (ha-834040-m04) Calling .GetIP
	I0311 20:30:39.513404   32085 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:39.513858   32085 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:30:39.513883   32085 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:39.514018   32085 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:30:39.514337   32085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:39.514374   32085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:39.528465   32085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39485
	I0311 20:30:39.528795   32085 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:39.529170   32085 main.go:141] libmachine: Using API Version  1
	I0311 20:30:39.529184   32085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:39.529438   32085 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:39.529597   32085 main.go:141] libmachine: (ha-834040-m04) Calling .DriverName
	I0311 20:30:39.529767   32085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:39.529780   32085 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHHostname
	I0311 20:30:39.532244   32085 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:39.532655   32085 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:30:39.532674   32085 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:39.532852   32085 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHPort
	I0311 20:30:39.533068   32085 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHKeyPath
	I0311 20:30:39.533222   32085 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHUsername
	I0311 20:30:39.533363   32085 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m04/id_rsa Username:docker}
	I0311 20:30:39.617143   32085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:39.632888   32085 status.go:257] ha-834040-m04 status: &{Name:ha-834040-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
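In this run the SSH dial to ha-834040-m02 (192.168.39.101:22) fails with "no route to host", is retried once after ~158ms, and then the node is reported as Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent, which is what pushes the command to exit status 3 after a few seconds. A minimal sketch of that dial-and-retry pattern is below; the function name, backoff values, and overall deadline are illustrative assumptions, not minikube's actual constants.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry attempts a TCP dial to a node's SSH port and retries with a
// short backoff until an overall deadline passes, similar in spirit to the
// retry/sshutil lines in the log above.
func dialWithRetry(addr string, overall time.Duration) error {
	deadline := time.Now().Add(overall)
	backoff := 150 * time.Millisecond
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			// Give up; the caller marks the host as Error, as seen above.
			return fmt.Errorf("giving up on %s: %w", addr, err)
		}
		time.Sleep(backoff)
		backoff *= 2
	}
}

func main() {
	if err := dialWithRetry("192.168.39.101:22", 10*time.Second); err != nil {
		fmt.Println("ssh unreachable:", err)
	}
}
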
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr: exit status 3 (4.760998798s)

                                                
                                                
-- stdout --
	ha-834040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-834040-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:30:41.333753   32181 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:30:41.333863   32181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:30:41.333873   32181 out.go:304] Setting ErrFile to fd 2...
	I0311 20:30:41.333877   32181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:30:41.334049   32181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:30:41.334217   32181 out.go:298] Setting JSON to false
	I0311 20:30:41.334246   32181 mustload.go:65] Loading cluster: ha-834040
	I0311 20:30:41.334298   32181 notify.go:220] Checking for updates...
	I0311 20:30:41.334587   32181 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:30:41.334600   32181 status.go:255] checking status of ha-834040 ...
	I0311 20:30:41.335077   32181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:41.335153   32181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:41.349962   32181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33439
	I0311 20:30:41.350386   32181 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:41.350877   32181 main.go:141] libmachine: Using API Version  1
	I0311 20:30:41.350900   32181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:41.351266   32181 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:41.351483   32181 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:30:41.353039   32181 status.go:330] ha-834040 host status = "Running" (err=<nil>)
	I0311 20:30:41.353058   32181 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:30:41.353346   32181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:41.353401   32181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:41.367355   32181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0311 20:30:41.367675   32181 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:41.368085   32181 main.go:141] libmachine: Using API Version  1
	I0311 20:30:41.368106   32181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:41.368461   32181 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:41.368636   32181 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:30:41.371290   32181 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:41.371769   32181 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:30:41.371793   32181 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:41.371948   32181 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:30:41.372264   32181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:41.372312   32181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:41.386518   32181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46663
	I0311 20:30:41.386833   32181 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:41.387212   32181 main.go:141] libmachine: Using API Version  1
	I0311 20:30:41.387230   32181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:41.387511   32181 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:41.387687   32181 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:30:41.387865   32181 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:41.387888   32181 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:30:41.390491   32181 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:41.390914   32181 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:30:41.390945   32181 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:41.391083   32181 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:30:41.391253   32181 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:30:41.391404   32181 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:30:41.391524   32181 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:30:41.468877   32181 ssh_runner.go:195] Run: systemctl --version
	I0311 20:30:41.476358   32181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:41.492468   32181 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:30:41.492492   32181 api_server.go:166] Checking apiserver status ...
	I0311 20:30:41.492522   32181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:30:41.515932   32181 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup
	W0311 20:30:41.527463   32181 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:30:41.527514   32181 ssh_runner.go:195] Run: ls
	I0311 20:30:41.532369   32181 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:30:41.543673   32181 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:30:41.543699   32181 status.go:422] ha-834040 apiserver status = Running (err=<nil>)
	I0311 20:30:41.543713   32181 status.go:257] ha-834040 status: &{Name:ha-834040 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:30:41.543736   32181 status.go:255] checking status of ha-834040-m02 ...
	I0311 20:30:41.544119   32181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:41.544155   32181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:41.559034   32181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I0311 20:30:41.559549   32181 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:41.560034   32181 main.go:141] libmachine: Using API Version  1
	I0311 20:30:41.560057   32181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:41.560392   32181 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:41.560559   32181 main.go:141] libmachine: (ha-834040-m02) Calling .GetState
	I0311 20:30:41.562042   32181 status.go:330] ha-834040-m02 host status = "Running" (err=<nil>)
	I0311 20:30:41.562058   32181 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:30:41.562440   32181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:41.562497   32181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:41.577716   32181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41465
	I0311 20:30:41.578116   32181 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:41.578615   32181 main.go:141] libmachine: Using API Version  1
	I0311 20:30:41.578646   32181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:41.578974   32181 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:41.579205   32181 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:30:41.582252   32181 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:41.582786   32181 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:30:41.582815   32181 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:41.582956   32181 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:30:41.583340   32181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:41.583384   32181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:41.598266   32181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0311 20:30:41.598616   32181 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:41.599040   32181 main.go:141] libmachine: Using API Version  1
	I0311 20:30:41.599062   32181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:41.599441   32181 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:41.599621   32181 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:30:41.599814   32181 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:41.599836   32181 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:30:41.602539   32181 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:41.603030   32181 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:30:41.603053   32181 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:41.603209   32181 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:30:41.603380   32181 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:30:41.603552   32181 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:30:41.603712   32181 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	W0311 20:30:42.344909   32181 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:42.344951   32181 retry.go:31] will retry after 249.73504ms: dial tcp 192.168.39.101:22: connect: no route to host
	W0311 20:30:45.672965   32181 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0311 20:30:45.673038   32181 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0311 20:30:45.673055   32181 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:45.673064   32181 status.go:257] ha-834040-m02 status: &{Name:ha-834040-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 20:30:45.673086   32181 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:45.673093   32181 status.go:255] checking status of ha-834040-m03 ...
	I0311 20:30:45.673401   32181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:45.673438   32181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:45.689686   32181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33759
	I0311 20:30:45.690062   32181 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:45.690507   32181 main.go:141] libmachine: Using API Version  1
	I0311 20:30:45.690532   32181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:45.690858   32181 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:45.691044   32181 main.go:141] libmachine: (ha-834040-m03) Calling .GetState
	I0311 20:30:45.692644   32181 status.go:330] ha-834040-m03 host status = "Running" (err=<nil>)
	I0311 20:30:45.692659   32181 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:30:45.693025   32181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:45.693070   32181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:45.708325   32181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43165
	I0311 20:30:45.708695   32181 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:45.709267   32181 main.go:141] libmachine: Using API Version  1
	I0311 20:30:45.709289   32181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:45.709611   32181 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:45.709800   32181 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:30:45.712574   32181 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:45.712988   32181 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:30:45.713008   32181 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:45.713130   32181 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:30:45.713411   32181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:45.713445   32181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:45.728441   32181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I0311 20:30:45.728805   32181 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:45.729272   32181 main.go:141] libmachine: Using API Version  1
	I0311 20:30:45.729296   32181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:45.729572   32181 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:45.729734   32181 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:30:45.729905   32181 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:45.729924   32181 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:30:45.732117   32181 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:45.732547   32181 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:30:45.732575   32181 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:45.732684   32181 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:30:45.732868   32181 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:30:45.733039   32181 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:30:45.733147   32181 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:30:45.825763   32181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:45.843431   32181 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:30:45.843463   32181 api_server.go:166] Checking apiserver status ...
	I0311 20:30:45.843516   32181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:30:45.859961   32181 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0311 20:30:45.870612   32181 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:30:45.870649   32181 ssh_runner.go:195] Run: ls
	I0311 20:30:45.875422   32181 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:30:45.880271   32181 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:30:45.880292   32181 status.go:422] ha-834040-m03 apiserver status = Running (err=<nil>)
	I0311 20:30:45.880299   32181 status.go:257] ha-834040-m03 status: &{Name:ha-834040-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:30:45.880311   32181 status.go:255] checking status of ha-834040-m04 ...
	I0311 20:30:45.880556   32181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:45.880584   32181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:45.895043   32181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39571
	I0311 20:30:45.895388   32181 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:45.895839   32181 main.go:141] libmachine: Using API Version  1
	I0311 20:30:45.895863   32181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:45.896178   32181 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:45.896355   32181 main.go:141] libmachine: (ha-834040-m04) Calling .GetState
	I0311 20:30:45.897831   32181 status.go:330] ha-834040-m04 host status = "Running" (err=<nil>)
	I0311 20:30:45.897848   32181 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:30:45.898091   32181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:45.898129   32181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:45.911672   32181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I0311 20:30:45.912014   32181 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:45.912448   32181 main.go:141] libmachine: Using API Version  1
	I0311 20:30:45.912467   32181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:45.912786   32181 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:45.912977   32181 main.go:141] libmachine: (ha-834040-m04) Calling .GetIP
	I0311 20:30:45.915703   32181 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:45.916067   32181 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:30:45.916095   32181 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:45.916231   32181 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:30:45.916522   32181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:45.916564   32181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:45.932850   32181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39471
	I0311 20:30:45.933231   32181 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:45.933680   32181 main.go:141] libmachine: Using API Version  1
	I0311 20:30:45.933701   32181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:45.934033   32181 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:45.934209   32181 main.go:141] libmachine: (ha-834040-m04) Calling .DriverName
	I0311 20:30:45.934379   32181 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:45.934403   32181 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHHostname
	I0311 20:30:45.937412   32181 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:45.937885   32181 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:30:45.937910   32181 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:45.938058   32181 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHPort
	I0311 20:30:45.938212   32181 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHKeyPath
	I0311 20:30:45.938346   32181 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHUsername
	I0311 20:30:45.938475   32181 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m04/id_rsa Username:docker}
	I0311 20:30:46.021139   32181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:46.037137   32181 status.go:257] ha-834040-m04 status: &{Name:ha-834040-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
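All three status invocations in this test return the same picture: m02 is unreachable while the other nodes stay healthy, and each run exits with status 3. The sketch below shows one plausible way per-node results like those in the stdout blocks could be folded into a single exit code; the NodeStatus struct and the mapping to exit code 3 are hypothetical simplifications for illustration, not minikube's real types or documented contract.

package main

import "fmt"

// NodeStatus is a simplified stand-in for the per-node status values printed
// in the report (Host, Kubelet, APIServer).
type NodeStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

// exitCode returns non-zero when any node's host is in Error or its kubelet
// is reported Nonexistent; 3 matches what the failing runs above show.
func exitCode(nodes []NodeStatus) int {
	for _, n := range nodes {
		if n.Host == "Error" || n.Kubelet == "Nonexistent" {
			return 3
		}
	}
	return 0
}

func main() {
	nodes := []NodeStatus{
		{"ha-834040", "Running", "Running", "Running"},
		{"ha-834040-m02", "Error", "Nonexistent", "Nonexistent"},
		{"ha-834040-m03", "Running", "Running", "Running"},
		{"ha-834040-m04", "Running", "Running", "Irrelevant"},
	}
	fmt.Println("exit status:", exitCode(nodes))
}
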
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr: exit status 3 (3.756902325s)

                                                
                                                
-- stdout --
	ha-834040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-834040-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:30:50.450765   32287 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:30:50.450967   32287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:30:50.450976   32287 out.go:304] Setting ErrFile to fd 2...
	I0311 20:30:50.450981   32287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:30:50.451129   32287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:30:50.451285   32287 out.go:298] Setting JSON to false
	I0311 20:30:50.451314   32287 mustload.go:65] Loading cluster: ha-834040
	I0311 20:30:50.451421   32287 notify.go:220] Checking for updates...
	I0311 20:30:50.451665   32287 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:30:50.451677   32287 status.go:255] checking status of ha-834040 ...
	I0311 20:30:50.452028   32287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:50.452079   32287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:50.470214   32287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I0311 20:30:50.470576   32287 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:50.471267   32287 main.go:141] libmachine: Using API Version  1
	I0311 20:30:50.471292   32287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:50.471589   32287 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:50.471784   32287 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:30:50.473401   32287 status.go:330] ha-834040 host status = "Running" (err=<nil>)
	I0311 20:30:50.473420   32287 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:30:50.473701   32287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:50.473738   32287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:50.487678   32287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37845
	I0311 20:30:50.488103   32287 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:50.488593   32287 main.go:141] libmachine: Using API Version  1
	I0311 20:30:50.488619   32287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:50.488945   32287 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:50.489125   32287 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:30:50.491566   32287 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:50.492044   32287 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:30:50.492079   32287 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:50.492240   32287 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:30:50.492627   32287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:50.492669   32287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:50.506730   32287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35917
	I0311 20:30:50.507136   32287 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:50.507560   32287 main.go:141] libmachine: Using API Version  1
	I0311 20:30:50.507578   32287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:50.507836   32287 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:50.507995   32287 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:30:50.508180   32287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:50.508204   32287 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:30:50.511132   32287 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:50.511549   32287 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:30:50.511580   32287 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:50.511693   32287 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:30:50.511862   32287 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:30:50.512003   32287 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:30:50.512141   32287 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:30:50.593423   32287 ssh_runner.go:195] Run: systemctl --version
	I0311 20:30:50.602196   32287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:50.618870   32287 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:30:50.618900   32287 api_server.go:166] Checking apiserver status ...
	I0311 20:30:50.618939   32287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:30:50.638158   32287 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup
	W0311 20:30:50.649187   32287 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:30:50.649225   32287 ssh_runner.go:195] Run: ls
	I0311 20:30:50.654737   32287 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:30:50.661417   32287 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:30:50.661434   32287 status.go:422] ha-834040 apiserver status = Running (err=<nil>)
	I0311 20:30:50.661443   32287 status.go:257] ha-834040 status: &{Name:ha-834040 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:30:50.661465   32287 status.go:255] checking status of ha-834040-m02 ...
	I0311 20:30:50.661741   32287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:50.661799   32287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:50.677671   32287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38173
	I0311 20:30:50.678071   32287 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:50.678510   32287 main.go:141] libmachine: Using API Version  1
	I0311 20:30:50.678536   32287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:50.678850   32287 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:50.678971   32287 main.go:141] libmachine: (ha-834040-m02) Calling .GetState
	I0311 20:30:50.680413   32287 status.go:330] ha-834040-m02 host status = "Running" (err=<nil>)
	I0311 20:30:50.680429   32287 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:30:50.680836   32287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:50.680874   32287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:50.695736   32287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39515
	I0311 20:30:50.696189   32287 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:50.696579   32287 main.go:141] libmachine: Using API Version  1
	I0311 20:30:50.696599   32287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:50.696929   32287 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:50.697091   32287 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:30:50.699346   32287 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:50.699821   32287 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:30:50.699852   32287 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:50.699993   32287 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:30:50.700392   32287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:50.700436   32287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:50.718439   32287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39687
	I0311 20:30:50.718808   32287 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:50.719324   32287 main.go:141] libmachine: Using API Version  1
	I0311 20:30:50.719347   32287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:50.719682   32287 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:50.719881   32287 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:30:50.720060   32287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:50.720079   32287 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:30:50.723306   32287 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:50.723813   32287 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:30:50.723847   32287 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:50.724017   32287 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:30:50.724184   32287 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:30:50.724360   32287 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:30:50.724513   32287 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	W0311 20:30:53.801024   32287 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0311 20:30:53.801091   32287 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0311 20:30:53.801105   32287 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:53.801114   32287 status.go:257] ha-834040-m02 status: &{Name:ha-834040-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 20:30:53.801130   32287 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:30:53.801140   32287 status.go:255] checking status of ha-834040-m03 ...
	I0311 20:30:53.801413   32287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:53.801448   32287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:53.816175   32287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0311 20:30:53.816790   32287 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:53.817317   32287 main.go:141] libmachine: Using API Version  1
	I0311 20:30:53.817346   32287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:53.817645   32287 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:53.817822   32287 main.go:141] libmachine: (ha-834040-m03) Calling .GetState
	I0311 20:30:53.819207   32287 status.go:330] ha-834040-m03 host status = "Running" (err=<nil>)
	I0311 20:30:53.819221   32287 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:30:53.819493   32287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:53.819523   32287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:53.833352   32287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40889
	I0311 20:30:53.833698   32287 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:53.834120   32287 main.go:141] libmachine: Using API Version  1
	I0311 20:30:53.834136   32287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:53.834499   32287 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:53.834675   32287 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:30:53.837329   32287 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:53.837681   32287 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:30:53.837708   32287 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:53.837846   32287 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:30:53.838697   32287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:53.838737   32287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:53.854349   32287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40637
	I0311 20:30:53.854752   32287 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:53.855114   32287 main.go:141] libmachine: Using API Version  1
	I0311 20:30:53.855135   32287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:53.855479   32287 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:53.855636   32287 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:30:53.855802   32287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:53.855819   32287 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:30:53.858458   32287 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:53.858891   32287 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:30:53.858919   32287 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:30:53.859028   32287 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:30:53.859178   32287 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:30:53.859281   32287 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:30:53.859367   32287 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:30:53.945768   32287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:53.961483   32287 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:30:53.961509   32287 api_server.go:166] Checking apiserver status ...
	I0311 20:30:53.961551   32287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:30:53.977467   32287 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0311 20:30:53.987241   32287 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:30:53.987289   32287 ssh_runner.go:195] Run: ls
	I0311 20:30:53.992069   32287 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:30:53.996986   32287 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:30:53.997006   32287 status.go:422] ha-834040-m03 apiserver status = Running (err=<nil>)
	I0311 20:30:53.997013   32287 status.go:257] ha-834040-m03 status: &{Name:ha-834040-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:30:53.997028   32287 status.go:255] checking status of ha-834040-m04 ...
	I0311 20:30:53.997322   32287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:53.997357   32287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:54.013052   32287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40285
	I0311 20:30:54.013470   32287 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:54.013885   32287 main.go:141] libmachine: Using API Version  1
	I0311 20:30:54.013906   32287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:54.014181   32287 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:54.014382   32287 main.go:141] libmachine: (ha-834040-m04) Calling .GetState
	I0311 20:30:54.015828   32287 status.go:330] ha-834040-m04 host status = "Running" (err=<nil>)
	I0311 20:30:54.015844   32287 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:30:54.016145   32287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:54.016205   32287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:54.029916   32287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44293
	I0311 20:30:54.030274   32287 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:54.030685   32287 main.go:141] libmachine: Using API Version  1
	I0311 20:30:54.030704   32287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:54.030986   32287 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:54.031152   32287 main.go:141] libmachine: (ha-834040-m04) Calling .GetIP
	I0311 20:30:54.033604   32287 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:54.033995   32287 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:30:54.034022   32287 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:54.034162   32287 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:30:54.034421   32287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:54.034450   32287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:54.047763   32287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43661
	I0311 20:30:54.048131   32287 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:54.048539   32287 main.go:141] libmachine: Using API Version  1
	I0311 20:30:54.048556   32287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:54.048863   32287 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:54.049022   32287 main.go:141] libmachine: (ha-834040-m04) Calling .DriverName
	I0311 20:30:54.049207   32287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:54.049225   32287 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHHostname
	I0311 20:30:54.051607   32287 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:54.051962   32287 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:30:54.051989   32287 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:30:54.052124   32287 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHPort
	I0311 20:30:54.052303   32287 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHKeyPath
	I0311 20:30:54.052455   32287 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHUsername
	I0311 20:30:54.052574   32287 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m04/id_rsa Username:docker}
	I0311 20:30:54.137350   32287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:54.154161   32287 status.go:257] ha-834040-m04 status: &{Name:ha-834040-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr: exit status 3 (3.746325211s)

                                                
                                                
-- stdout --
	ha-834040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-834040-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:30:59.458546   32395 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:30:59.458650   32395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:30:59.458659   32395 out.go:304] Setting ErrFile to fd 2...
	I0311 20:30:59.458663   32395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:30:59.458873   32395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:30:59.459040   32395 out.go:298] Setting JSON to false
	I0311 20:30:59.459070   32395 mustload.go:65] Loading cluster: ha-834040
	I0311 20:30:59.459200   32395 notify.go:220] Checking for updates...
	I0311 20:30:59.459569   32395 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:30:59.459588   32395 status.go:255] checking status of ha-834040 ...
	I0311 20:30:59.460005   32395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:59.460078   32395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:59.474724   32395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I0311 20:30:59.475108   32395 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:59.475738   32395 main.go:141] libmachine: Using API Version  1
	I0311 20:30:59.475772   32395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:59.476094   32395 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:59.476275   32395 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:30:59.478347   32395 status.go:330] ha-834040 host status = "Running" (err=<nil>)
	I0311 20:30:59.478368   32395 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:30:59.478738   32395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:59.478775   32395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:59.492999   32395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I0311 20:30:59.493394   32395 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:59.493884   32395 main.go:141] libmachine: Using API Version  1
	I0311 20:30:59.493914   32395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:59.494185   32395 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:59.494354   32395 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:30:59.496988   32395 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:59.497359   32395 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:30:59.497395   32395 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:59.497505   32395 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:30:59.497881   32395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:59.497923   32395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:59.511644   32395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I0311 20:30:59.511984   32395 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:59.512463   32395 main.go:141] libmachine: Using API Version  1
	I0311 20:30:59.512487   32395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:59.512784   32395 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:59.512948   32395 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:30:59.513113   32395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:59.513143   32395 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:30:59.515374   32395 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:59.515773   32395 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:30:59.515798   32395 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:30:59.515915   32395 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:30:59.516084   32395 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:30:59.516247   32395 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:30:59.516388   32395 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:30:59.598376   32395 ssh_runner.go:195] Run: systemctl --version
	I0311 20:30:59.609192   32395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:30:59.626364   32395 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:30:59.626389   32395 api_server.go:166] Checking apiserver status ...
	I0311 20:30:59.626424   32395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:30:59.641253   32395 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup
	W0311 20:30:59.653561   32395 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:30:59.653609   32395 ssh_runner.go:195] Run: ls
	I0311 20:30:59.660448   32395 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:30:59.669822   32395 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:30:59.669845   32395 status.go:422] ha-834040 apiserver status = Running (err=<nil>)
	I0311 20:30:59.669883   32395 status.go:257] ha-834040 status: &{Name:ha-834040 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:30:59.669910   32395 status.go:255] checking status of ha-834040-m02 ...
	I0311 20:30:59.670303   32395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:59.670352   32395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:59.685547   32395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0311 20:30:59.685935   32395 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:59.686411   32395 main.go:141] libmachine: Using API Version  1
	I0311 20:30:59.686436   32395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:59.686753   32395 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:59.686956   32395 main.go:141] libmachine: (ha-834040-m02) Calling .GetState
	I0311 20:30:59.688563   32395 status.go:330] ha-834040-m02 host status = "Running" (err=<nil>)
	I0311 20:30:59.688577   32395 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:30:59.688980   32395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:59.689021   32395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:59.705280   32395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0311 20:30:59.705664   32395 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:59.706087   32395 main.go:141] libmachine: Using API Version  1
	I0311 20:30:59.706106   32395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:59.706481   32395 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:59.706703   32395 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:30:59.709120   32395 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:59.709531   32395 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:30:59.709560   32395 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:59.709719   32395 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:30:59.710044   32395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:30:59.710084   32395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:30:59.723804   32395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36909
	I0311 20:30:59.724196   32395 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:30:59.724613   32395 main.go:141] libmachine: Using API Version  1
	I0311 20:30:59.724632   32395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:30:59.724962   32395 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:30:59.725169   32395 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:30:59.725345   32395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:30:59.725367   32395 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:30:59.727933   32395 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:59.728389   32395 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:30:59.728415   32395 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:30:59.728582   32395 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:30:59.728754   32395 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:30:59.728898   32395 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:30:59.729011   32395 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	W0311 20:31:02.792988   32395 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.101:22: connect: no route to host
	W0311 20:31:02.793074   32395 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	E0311 20:31:02.793096   32395 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:31:02.793110   32395 status.go:257] ha-834040-m02 status: &{Name:ha-834040-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 20:31:02.793134   32395 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	I0311 20:31:02.793145   32395 status.go:255] checking status of ha-834040-m03 ...
	I0311 20:31:02.793575   32395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:02.793625   32395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:02.809947   32395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34501
	I0311 20:31:02.810402   32395 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:02.810973   32395 main.go:141] libmachine: Using API Version  1
	I0311 20:31:02.810995   32395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:02.811327   32395 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:02.811496   32395 main.go:141] libmachine: (ha-834040-m03) Calling .GetState
	I0311 20:31:02.813085   32395 status.go:330] ha-834040-m03 host status = "Running" (err=<nil>)
	I0311 20:31:02.813104   32395 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:31:02.813427   32395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:02.813505   32395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:02.828345   32395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42713
	I0311 20:31:02.828753   32395 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:02.829170   32395 main.go:141] libmachine: Using API Version  1
	I0311 20:31:02.829193   32395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:02.829485   32395 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:02.829688   32395 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:31:02.832480   32395 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:02.832946   32395 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:31:02.832970   32395 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:02.833144   32395 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:31:02.833522   32395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:02.833565   32395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:02.846991   32395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35635
	I0311 20:31:02.847303   32395 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:02.847675   32395 main.go:141] libmachine: Using API Version  1
	I0311 20:31:02.847694   32395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:02.847986   32395 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:02.848137   32395 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:31:02.848288   32395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:31:02.848309   32395 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:31:02.850769   32395 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:02.851092   32395 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:31:02.851111   32395 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:02.851254   32395 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:31:02.851448   32395 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:31:02.851602   32395 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:31:02.851722   32395 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:31:02.938711   32395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:31:02.954291   32395 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:31:02.954318   32395 api_server.go:166] Checking apiserver status ...
	I0311 20:31:02.954352   32395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:31:02.968699   32395 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0311 20:31:02.979971   32395 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:31:02.980015   32395 ssh_runner.go:195] Run: ls
	I0311 20:31:02.985011   32395 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:31:02.989559   32395 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:31:02.989580   32395 status.go:422] ha-834040-m03 apiserver status = Running (err=<nil>)
	I0311 20:31:02.989587   32395 status.go:257] ha-834040-m03 status: &{Name:ha-834040-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:31:02.989605   32395 status.go:255] checking status of ha-834040-m04 ...
	I0311 20:31:02.989945   32395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:02.989987   32395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:03.004415   32395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43793
	I0311 20:31:03.004845   32395 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:03.005297   32395 main.go:141] libmachine: Using API Version  1
	I0311 20:31:03.005311   32395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:03.005562   32395 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:03.005717   32395 main.go:141] libmachine: (ha-834040-m04) Calling .GetState
	I0311 20:31:03.007104   32395 status.go:330] ha-834040-m04 host status = "Running" (err=<nil>)
	I0311 20:31:03.007120   32395 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:31:03.007484   32395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:03.007523   32395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:03.022554   32395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43811
	I0311 20:31:03.022924   32395 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:03.023415   32395 main.go:141] libmachine: Using API Version  1
	I0311 20:31:03.023439   32395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:03.023745   32395 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:03.023939   32395 main.go:141] libmachine: (ha-834040-m04) Calling .GetIP
	I0311 20:31:03.026521   32395 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:03.026938   32395 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:31:03.026970   32395 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:03.027059   32395 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:31:03.027434   32395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:03.027470   32395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:03.042076   32395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I0311 20:31:03.042466   32395 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:03.042917   32395 main.go:141] libmachine: Using API Version  1
	I0311 20:31:03.042941   32395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:03.043214   32395 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:03.043395   32395 main.go:141] libmachine: (ha-834040-m04) Calling .DriverName
	I0311 20:31:03.043584   32395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:31:03.043618   32395 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHHostname
	I0311 20:31:03.046223   32395 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:03.046606   32395 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:31:03.046639   32395 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:03.046779   32395 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHPort
	I0311 20:31:03.046934   32395 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHKeyPath
	I0311 20:31:03.047062   32395 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHUsername
	I0311 20:31:03.047208   32395 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m04/id_rsa Username:docker}
	I0311 20:31:03.132857   32395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:31:03.149578   32395 status.go:257] ha-834040-m04 status: &{Name:ha-834040-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
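The run below is the first where ha-834040-m02 is reported as Stopped rather than Error, i.e. the host state has settled after the secondary node was taken down. A hedged polling sketch for waiting on that transition (illustrative only; the retry the test itself performs lives in ha_test.go and may differ):

	# poll until the stopped control-plane node is reported as Stopped instead of Error
	for i in $(seq 1 60); do
	  out="$(out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr 2>/dev/null || true)"
	  echo "$out" | grep -A2 'ha-834040-m02' | grep -q 'host: Stopped' && break
	  sleep 5
	done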
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr: exit status 7 (636.029029ms)

                                                
                                                
-- stdout --
	ha-834040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-834040-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:31:13.192093   32522 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:31:13.192262   32522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:31:13.192274   32522 out.go:304] Setting ErrFile to fd 2...
	I0311 20:31:13.192281   32522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:31:13.192560   32522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:31:13.192819   32522 out.go:298] Setting JSON to false
	I0311 20:31:13.192857   32522 mustload.go:65] Loading cluster: ha-834040
	I0311 20:31:13.192899   32522 notify.go:220] Checking for updates...
	I0311 20:31:13.193383   32522 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:31:13.193404   32522 status.go:255] checking status of ha-834040 ...
	I0311 20:31:13.193996   32522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:13.194067   32522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:13.209928   32522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
	I0311 20:31:13.210300   32522 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:13.210828   32522 main.go:141] libmachine: Using API Version  1
	I0311 20:31:13.210853   32522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:13.211245   32522 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:13.211430   32522 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:31:13.213064   32522 status.go:330] ha-834040 host status = "Running" (err=<nil>)
	I0311 20:31:13.213108   32522 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:31:13.213448   32522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:13.213480   32522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:13.227140   32522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
	I0311 20:31:13.227470   32522 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:13.227850   32522 main.go:141] libmachine: Using API Version  1
	I0311 20:31:13.227879   32522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:13.228182   32522 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:13.228353   32522 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:31:13.231202   32522 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:31:13.231634   32522 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:31:13.231660   32522 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:31:13.231797   32522 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:31:13.232064   32522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:13.232102   32522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:13.245638   32522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34983
	I0311 20:31:13.245979   32522 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:13.246362   32522 main.go:141] libmachine: Using API Version  1
	I0311 20:31:13.246383   32522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:13.246911   32522 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:13.247055   32522 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:31:13.247242   32522 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:31:13.247282   32522 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:31:13.249762   32522 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:31:13.250200   32522 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:31:13.250237   32522 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:31:13.250383   32522 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:31:13.250548   32522 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:31:13.250688   32522 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:31:13.250848   32522 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:31:13.333576   32522 ssh_runner.go:195] Run: systemctl --version
	I0311 20:31:13.340210   32522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:31:13.357115   32522 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:31:13.357137   32522 api_server.go:166] Checking apiserver status ...
	I0311 20:31:13.357163   32522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:31:13.372464   32522 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup
	W0311 20:31:13.385123   32522 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:31:13.385171   32522 ssh_runner.go:195] Run: ls
	I0311 20:31:13.389883   32522 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:31:13.396183   32522 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:31:13.396202   32522 status.go:422] ha-834040 apiserver status = Running (err=<nil>)
	I0311 20:31:13.396222   32522 status.go:257] ha-834040 status: &{Name:ha-834040 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:31:13.396239   32522 status.go:255] checking status of ha-834040-m02 ...
	I0311 20:31:13.396498   32522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:13.396528   32522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:13.410791   32522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33929
	I0311 20:31:13.411240   32522 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:13.411653   32522 main.go:141] libmachine: Using API Version  1
	I0311 20:31:13.411674   32522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:13.411960   32522 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:13.412119   32522 main.go:141] libmachine: (ha-834040-m02) Calling .GetState
	I0311 20:31:13.413609   32522 status.go:330] ha-834040-m02 host status = "Stopped" (err=<nil>)
	I0311 20:31:13.413623   32522 status.go:343] host is not running, skipping remaining checks
	I0311 20:31:13.413629   32522 status.go:257] ha-834040-m02 status: &{Name:ha-834040-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:31:13.413642   32522 status.go:255] checking status of ha-834040-m03 ...
	I0311 20:31:13.413903   32522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:13.413940   32522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:13.427396   32522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44395
	I0311 20:31:13.427735   32522 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:13.428100   32522 main.go:141] libmachine: Using API Version  1
	I0311 20:31:13.428119   32522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:13.428438   32522 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:13.428588   32522 main.go:141] libmachine: (ha-834040-m03) Calling .GetState
	I0311 20:31:13.429868   32522 status.go:330] ha-834040-m03 host status = "Running" (err=<nil>)
	I0311 20:31:13.429883   32522 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:31:13.430136   32522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:13.430164   32522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:13.443734   32522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34047
	I0311 20:31:13.444084   32522 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:13.444493   32522 main.go:141] libmachine: Using API Version  1
	I0311 20:31:13.444512   32522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:13.444826   32522 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:13.445015   32522 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:31:13.447415   32522 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:13.447843   32522 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:31:13.447877   32522 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:13.448003   32522 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:31:13.448277   32522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:13.448312   32522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:13.462715   32522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I0311 20:31:13.463052   32522 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:13.463511   32522 main.go:141] libmachine: Using API Version  1
	I0311 20:31:13.463548   32522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:13.463865   32522 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:13.464108   32522 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:31:13.464323   32522 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:31:13.464344   32522 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:31:13.466786   32522 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:13.467144   32522 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:31:13.467170   32522 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:13.467279   32522 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:31:13.467464   32522 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:31:13.467601   32522 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:31:13.467730   32522 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:31:13.553844   32522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:31:13.569585   32522 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:31:13.569607   32522 api_server.go:166] Checking apiserver status ...
	I0311 20:31:13.569634   32522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:31:13.588231   32522 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0311 20:31:13.600889   32522 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:31:13.600971   32522 ssh_runner.go:195] Run: ls
	I0311 20:31:13.605848   32522 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:31:13.610280   32522 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:31:13.610297   32522 status.go:422] ha-834040-m03 apiserver status = Running (err=<nil>)
	I0311 20:31:13.610304   32522 status.go:257] ha-834040-m03 status: &{Name:ha-834040-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:31:13.610318   32522 status.go:255] checking status of ha-834040-m04 ...
	I0311 20:31:13.610639   32522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:13.610682   32522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:13.624854   32522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44909
	I0311 20:31:13.625295   32522 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:13.625803   32522 main.go:141] libmachine: Using API Version  1
	I0311 20:31:13.625835   32522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:13.626116   32522 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:13.626252   32522 main.go:141] libmachine: (ha-834040-m04) Calling .GetState
	I0311 20:31:13.627592   32522 status.go:330] ha-834040-m04 host status = "Running" (err=<nil>)
	I0311 20:31:13.627607   32522 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:31:13.627986   32522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:13.628028   32522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:13.644566   32522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45867
	I0311 20:31:13.644934   32522 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:13.645349   32522 main.go:141] libmachine: Using API Version  1
	I0311 20:31:13.645369   32522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:13.645656   32522 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:13.645823   32522 main.go:141] libmachine: (ha-834040-m04) Calling .GetIP
	I0311 20:31:13.648448   32522 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:13.648834   32522 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:31:13.648876   32522 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:13.648993   32522 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:31:13.649252   32522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:13.649295   32522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:13.663397   32522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43179
	I0311 20:31:13.663727   32522 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:13.664173   32522 main.go:141] libmachine: Using API Version  1
	I0311 20:31:13.664191   32522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:13.664471   32522 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:13.664653   32522 main.go:141] libmachine: (ha-834040-m04) Calling .DriverName
	I0311 20:31:13.664855   32522 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:31:13.664880   32522 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHHostname
	I0311 20:31:13.667752   32522 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:13.668224   32522 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:31:13.668254   32522 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:13.668399   32522 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHPort
	I0311 20:31:13.668571   32522 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHKeyPath
	I0311 20:31:13.668754   32522 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHUsername
	I0311 20:31:13.668914   32522 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m04/id_rsa Username:docker}
	I0311 20:31:13.755242   32522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:31:13.773120   32522 status.go:257] ha-834040-m04 status: &{Name:ha-834040-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
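For context on the apiserver checks recorded in the trace above (api_server.go:253-279): the status probe is an HTTPS GET against the cluster's load-balanced endpoint, and a 200 response from /healthz is reported as "Running". The following is a minimal, self-contained Go sketch of an equivalent probe, not minikube's actual implementation; the endpoint is the virtual IP reported in the log, and InsecureSkipVerify is an assumption made only to keep the example standalone (minikube itself verifies against the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz mirrors the probe seen in the log: GET <server>/healthz
// over HTTPS and treat an HTTP 200 as a running apiserver.
func checkHealthz(server string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch only; minikube trusts the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(server + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Endpoint taken from the log above (the HA virtual IP on port 8443).
	if err := checkHealthz("https://192.168.39.254:8443"); err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	fmt.Println("apiserver status = Running")
}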
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr: exit status 7 (645.416937ms)

                                                
                                                
-- stdout --
	ha-834040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-834040-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:31:21.996065   32616 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:31:21.996194   32616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:31:21.996205   32616 out.go:304] Setting ErrFile to fd 2...
	I0311 20:31:21.996212   32616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:31:21.996481   32616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:31:21.996656   32616 out.go:298] Setting JSON to false
	I0311 20:31:21.996682   32616 mustload.go:65] Loading cluster: ha-834040
	I0311 20:31:21.996806   32616 notify.go:220] Checking for updates...
	I0311 20:31:21.997044   32616 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:31:21.997057   32616 status.go:255] checking status of ha-834040 ...
	I0311 20:31:21.997387   32616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:21.997444   32616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:22.012192   32616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I0311 20:31:22.012544   32616 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:22.013156   32616 main.go:141] libmachine: Using API Version  1
	I0311 20:31:22.013186   32616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:22.013614   32616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:22.013829   32616 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:31:22.015433   32616 status.go:330] ha-834040 host status = "Running" (err=<nil>)
	I0311 20:31:22.015453   32616 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:31:22.015699   32616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:22.015733   32616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:22.029804   32616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39561
	I0311 20:31:22.030180   32616 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:22.030570   32616 main.go:141] libmachine: Using API Version  1
	I0311 20:31:22.030592   32616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:22.030949   32616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:22.031123   32616 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:31:22.033658   32616 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:31:22.034148   32616 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:31:22.034174   32616 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:31:22.034301   32616 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:31:22.034637   32616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:22.034677   32616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:22.048576   32616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I0311 20:31:22.048943   32616 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:22.049391   32616 main.go:141] libmachine: Using API Version  1
	I0311 20:31:22.049422   32616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:22.049720   32616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:22.049906   32616 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:31:22.050088   32616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:31:22.050115   32616 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:31:22.052791   32616 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:31:22.053107   32616 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:31:22.053138   32616 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:31:22.053241   32616 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:31:22.053394   32616 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:31:22.053534   32616 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:31:22.053694   32616 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:31:22.133202   32616 ssh_runner.go:195] Run: systemctl --version
	I0311 20:31:22.140084   32616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:31:22.159669   32616 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:31:22.159690   32616 api_server.go:166] Checking apiserver status ...
	I0311 20:31:22.159722   32616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:31:22.175067   32616 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup
	W0311 20:31:22.188064   32616 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:31:22.188117   32616 ssh_runner.go:195] Run: ls
	I0311 20:31:22.193171   32616 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:31:22.200601   32616 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:31:22.200621   32616 status.go:422] ha-834040 apiserver status = Running (err=<nil>)
	I0311 20:31:22.200631   32616 status.go:257] ha-834040 status: &{Name:ha-834040 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:31:22.200651   32616 status.go:255] checking status of ha-834040-m02 ...
	I0311 20:31:22.201096   32616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:22.201138   32616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:22.217143   32616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45611
	I0311 20:31:22.217505   32616 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:22.217891   32616 main.go:141] libmachine: Using API Version  1
	I0311 20:31:22.217913   32616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:22.218287   32616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:22.218467   32616 main.go:141] libmachine: (ha-834040-m02) Calling .GetState
	I0311 20:31:22.220053   32616 status.go:330] ha-834040-m02 host status = "Stopped" (err=<nil>)
	I0311 20:31:22.220070   32616 status.go:343] host is not running, skipping remaining checks
	I0311 20:31:22.220077   32616 status.go:257] ha-834040-m02 status: &{Name:ha-834040-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:31:22.220096   32616 status.go:255] checking status of ha-834040-m03 ...
	I0311 20:31:22.220487   32616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:22.220523   32616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:22.234440   32616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I0311 20:31:22.234828   32616 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:22.235263   32616 main.go:141] libmachine: Using API Version  1
	I0311 20:31:22.235287   32616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:22.235581   32616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:22.235746   32616 main.go:141] libmachine: (ha-834040-m03) Calling .GetState
	I0311 20:31:22.237171   32616 status.go:330] ha-834040-m03 host status = "Running" (err=<nil>)
	I0311 20:31:22.237184   32616 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:31:22.237506   32616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:22.237542   32616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:22.251001   32616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35539
	I0311 20:31:22.251441   32616 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:22.251916   32616 main.go:141] libmachine: Using API Version  1
	I0311 20:31:22.251938   32616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:22.252244   32616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:22.252444   32616 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:31:22.255222   32616 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:22.255637   32616 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:31:22.255661   32616 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:22.255835   32616 host.go:66] Checking if "ha-834040-m03" exists ...
	I0311 20:31:22.256151   32616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:22.256195   32616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:22.269977   32616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39005
	I0311 20:31:22.270307   32616 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:22.270720   32616 main.go:141] libmachine: Using API Version  1
	I0311 20:31:22.270739   32616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:22.271041   32616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:22.271228   32616 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:31:22.271417   32616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:31:22.271436   32616 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:31:22.274091   32616 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:22.274509   32616 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:31:22.274531   32616 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:22.274666   32616 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:31:22.274825   32616 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:31:22.274956   32616 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:31:22.275113   32616 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:31:22.366123   32616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:31:22.386109   32616 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:31:22.386135   32616 api_server.go:166] Checking apiserver status ...
	I0311 20:31:22.386173   32616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:31:22.400277   32616 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0311 20:31:22.413649   32616 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:31:22.413696   32616 ssh_runner.go:195] Run: ls
	I0311 20:31:22.418685   32616 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:31:22.423257   32616 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:31:22.423277   32616 status.go:422] ha-834040-m03 apiserver status = Running (err=<nil>)
	I0311 20:31:22.423296   32616 status.go:257] ha-834040-m03 status: &{Name:ha-834040-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:31:22.423318   32616 status.go:255] checking status of ha-834040-m04 ...
	I0311 20:31:22.423601   32616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:22.423633   32616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:22.438592   32616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34367
	I0311 20:31:22.438918   32616 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:22.439418   32616 main.go:141] libmachine: Using API Version  1
	I0311 20:31:22.439436   32616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:22.439785   32616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:22.439982   32616 main.go:141] libmachine: (ha-834040-m04) Calling .GetState
	I0311 20:31:22.441512   32616 status.go:330] ha-834040-m04 host status = "Running" (err=<nil>)
	I0311 20:31:22.441527   32616 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:31:22.441817   32616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:22.441847   32616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:22.456481   32616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35119
	I0311 20:31:22.456885   32616 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:22.457363   32616 main.go:141] libmachine: Using API Version  1
	I0311 20:31:22.457391   32616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:22.457734   32616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:22.457925   32616 main.go:141] libmachine: (ha-834040-m04) Calling .GetIP
	I0311 20:31:22.460592   32616 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:22.461030   32616 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:31:22.461049   32616 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:22.461190   32616 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:31:22.461496   32616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:22.461541   32616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:22.475187   32616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
	I0311 20:31:22.475551   32616 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:22.475986   32616 main.go:141] libmachine: Using API Version  1
	I0311 20:31:22.476008   32616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:22.476283   32616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:22.476480   32616 main.go:141] libmachine: (ha-834040-m04) Calling .DriverName
	I0311 20:31:22.476632   32616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:31:22.476651   32616 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHHostname
	I0311 20:31:22.478868   32616 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:22.479315   32616 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:31:22.479343   32616 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:22.479422   32616 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHPort
	I0311 20:31:22.479573   32616 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHKeyPath
	I0311 20:31:22.479732   32616 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHUsername
	I0311 20:31:22.479870   32616 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m04/id_rsa Username:docker}
	I0311 20:31:22.565314   32616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:31:22.584403   32616 status.go:257] ha-834040-m04 status: &{Name:ha-834040-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr" : exit status 7
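The non-zero result here is consistent with the stderr trace: ha-834040-m02 is still reported Host:Stopped after the node start attempt, so the status command exits with a failure code (7 in this run) and the assertion at ha_test.go:432 trips. Below is a minimal Go sketch of how a caller outside the test harness could surface the same signal; the binary path and profile name are taken from the log, and any non-zero exit is simply treated as a degraded cluster rather than decoding the specific exit code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the harness used; the profile name comes from the log above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-834040", "status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Non-zero exit (7 in the run above) means at least one node is not fully running.
		fmt.Printf("minikube status exited with code %d: cluster degraded\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run minikube status:", err)
	}
}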
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-834040 -n ha-834040
helpers_test.go:244: <<< TestMutliControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-834040 logs -n 25: (1.49934518s)
helpers_test.go:252: TestMutliControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040:/home/docker/cp-test_ha-834040-m03_ha-834040.txt                       |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040 sudo cat                                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m03_ha-834040.txt                                 |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m02:/home/docker/cp-test_ha-834040-m03_ha-834040-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m02 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m03_ha-834040-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04:/home/docker/cp-test_ha-834040-m03_ha-834040-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m04 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m03_ha-834040-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-834040 cp testdata/cp-test.txt                                                | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile2017558617/001/cp-test_ha-834040-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040:/home/docker/cp-test_ha-834040-m04_ha-834040.txt                       |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040 sudo cat                                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m04_ha-834040.txt                                 |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m02:/home/docker/cp-test_ha-834040-m04_ha-834040-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m02 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m04_ha-834040-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03:/home/docker/cp-test_ha-834040-m04_ha-834040-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m03 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m04_ha-834040-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-834040 node stop m02 -v=7                                                     | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-834040 node start m02 -v=7                                                    | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 20:22:45
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 20:22:45.357118   27491 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:22:45.357232   27491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:22:45.357242   27491 out.go:304] Setting ErrFile to fd 2...
	I0311 20:22:45.357254   27491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:22:45.357457   27491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:22:45.357980   27491 out.go:298] Setting JSON to false
	I0311 20:22:45.358846   27491 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3914,"bootTime":1710184651,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 20:22:45.358900   27491 start.go:139] virtualization: kvm guest
	I0311 20:22:45.361360   27491 out.go:177] * [ha-834040] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 20:22:45.362829   27491 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 20:22:45.362813   27491 notify.go:220] Checking for updates...
	I0311 20:22:45.364611   27491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 20:22:45.365924   27491 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:22:45.367155   27491 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:22:45.368447   27491 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 20:22:45.369687   27491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 20:22:45.371128   27491 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 20:22:45.404336   27491 out.go:177] * Using the kvm2 driver based on user configuration
	I0311 20:22:45.405688   27491 start.go:297] selected driver: kvm2
	I0311 20:22:45.405707   27491 start.go:901] validating driver "kvm2" against <nil>
	I0311 20:22:45.405720   27491 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 20:22:45.406651   27491 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:22:45.406715   27491 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 20:22:45.420585   27491 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 20:22:45.420628   27491 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 20:22:45.420860   27491 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 20:22:45.420886   27491 cni.go:84] Creating CNI manager for ""
	I0311 20:22:45.420891   27491 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0311 20:22:45.420895   27491 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 20:22:45.420942   27491 start.go:340] cluster config:
	{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:22:45.421030   27491 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:22:45.422794   27491 out.go:177] * Starting "ha-834040" primary control-plane node in "ha-834040" cluster
	I0311 20:22:45.424002   27491 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:22:45.424025   27491 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0311 20:22:45.424036   27491 cache.go:56] Caching tarball of preloaded images
	I0311 20:22:45.424108   27491 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 20:22:45.424119   27491 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 20:22:45.424428   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:22:45.424452   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json: {Name:mk847490f58f22447c66fcb3c2cb95216eb6be6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:22:45.424565   27491 start.go:360] acquireMachinesLock for ha-834040: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 20:22:45.424591   27491 start.go:364] duration metric: took 14.057µs to acquireMachinesLock for "ha-834040"
	I0311 20:22:45.424606   27491 start.go:93] Provisioning new machine with config: &{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:22:45.424660   27491 start.go:125] createHost starting for "" (driver="kvm2")
	I0311 20:22:45.426188   27491 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 20:22:45.426292   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:22:45.426326   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:22:45.439379   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I0311 20:22:45.439717   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:22:45.440227   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:22:45.440245   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:22:45.440541   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:22:45.440715   27491 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:22:45.440871   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:22:45.440997   27491 start.go:159] libmachine.API.Create for "ha-834040" (driver="kvm2")
	I0311 20:22:45.441016   27491 client.go:168] LocalClient.Create starting
	I0311 20:22:45.441039   27491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem
	I0311 20:22:45.441070   27491 main.go:141] libmachine: Decoding PEM data...
	I0311 20:22:45.441088   27491 main.go:141] libmachine: Parsing certificate...
	I0311 20:22:45.441134   27491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem
	I0311 20:22:45.441151   27491 main.go:141] libmachine: Decoding PEM data...
	I0311 20:22:45.441170   27491 main.go:141] libmachine: Parsing certificate...
	I0311 20:22:45.441189   27491 main.go:141] libmachine: Running pre-create checks...
	I0311 20:22:45.441198   27491 main.go:141] libmachine: (ha-834040) Calling .PreCreateCheck
	I0311 20:22:45.441496   27491 main.go:141] libmachine: (ha-834040) Calling .GetConfigRaw
	I0311 20:22:45.441803   27491 main.go:141] libmachine: Creating machine...
	I0311 20:22:45.441814   27491 main.go:141] libmachine: (ha-834040) Calling .Create
	I0311 20:22:45.441906   27491 main.go:141] libmachine: (ha-834040) Creating KVM machine...
	I0311 20:22:45.443025   27491 main.go:141] libmachine: (ha-834040) DBG | found existing default KVM network
	I0311 20:22:45.443636   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:45.443515   27514 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0311 20:22:45.443652   27491 main.go:141] libmachine: (ha-834040) DBG | created network xml: 
	I0311 20:22:45.443660   27491 main.go:141] libmachine: (ha-834040) DBG | <network>
	I0311 20:22:45.443667   27491 main.go:141] libmachine: (ha-834040) DBG |   <name>mk-ha-834040</name>
	I0311 20:22:45.443678   27491 main.go:141] libmachine: (ha-834040) DBG |   <dns enable='no'/>
	I0311 20:22:45.443689   27491 main.go:141] libmachine: (ha-834040) DBG |   
	I0311 20:22:45.443696   27491 main.go:141] libmachine: (ha-834040) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0311 20:22:45.443704   27491 main.go:141] libmachine: (ha-834040) DBG |     <dhcp>
	I0311 20:22:45.443714   27491 main.go:141] libmachine: (ha-834040) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0311 20:22:45.443729   27491 main.go:141] libmachine: (ha-834040) DBG |     </dhcp>
	I0311 20:22:45.443743   27491 main.go:141] libmachine: (ha-834040) DBG |   </ip>
	I0311 20:22:45.443752   27491 main.go:141] libmachine: (ha-834040) DBG |   
	I0311 20:22:45.443771   27491 main.go:141] libmachine: (ha-834040) DBG | </network>
	I0311 20:22:45.443786   27491 main.go:141] libmachine: (ha-834040) DBG | 
	I0311 20:22:45.448381   27491 main.go:141] libmachine: (ha-834040) DBG | trying to create private KVM network mk-ha-834040 192.168.39.0/24...
	I0311 20:22:45.509320   27491 main.go:141] libmachine: (ha-834040) DBG | private KVM network mk-ha-834040 192.168.39.0/24 created
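(Editor's note: the lines above show the kvm2 driver generating network XML and creating the private libvirt network mk-ha-834040. As a rough, hypothetical sketch of that step, the Go snippet below defines and starts such a network by shelling out to virsh; it is not the driver's actual code, which talks to libvirt directly, and the file path and XML values are illustrative, taken from the log.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// networkXML mirrors the shape of the XML logged above; values are illustrative.
const networkXML = `<network>
  <name>mk-ha-834040</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Write the definition to a temporary file so virsh can read it.
	f, err := os.CreateTemp("", "mk-ha-834040-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	// Define the persistent network, then start it (roughly the point at
	// which the driver logs "private KVM network ... created").
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-ha-834040"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s\n", args, out)
		if err != nil {
			panic(err)
		}
	}
}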
	I0311 20:22:45.509382   27491 main.go:141] libmachine: (ha-834040) Setting up store path in /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040 ...
	I0311 20:22:45.509410   27491 main.go:141] libmachine: (ha-834040) Building disk image from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0311 20:22:45.509430   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:45.509373   27514 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:22:45.509576   27491 main.go:141] libmachine: (ha-834040) Downloading /home/jenkins/minikube-integration/18358-11004/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0311 20:22:45.732384   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:45.732249   27514 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa...
	I0311 20:22:45.834319   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:45.834220   27514 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/ha-834040.rawdisk...
	I0311 20:22:45.834351   27491 main.go:141] libmachine: (ha-834040) DBG | Writing magic tar header
	I0311 20:22:45.834361   27491 main.go:141] libmachine: (ha-834040) DBG | Writing SSH key tar header
	I0311 20:22:45.834375   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:45.834346   27514 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040 ...
	I0311 20:22:45.834463   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040
	I0311 20:22:45.834496   27491 main.go:141] libmachine: (ha-834040) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040 (perms=drwx------)
	I0311 20:22:45.834508   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines
	I0311 20:22:45.834528   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:22:45.834535   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004
	I0311 20:22:45.834543   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0311 20:22:45.834550   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home/jenkins
	I0311 20:22:45.834562   27491 main.go:141] libmachine: (ha-834040) DBG | Checking permissions on dir: /home
	I0311 20:22:45.834571   27491 main.go:141] libmachine: (ha-834040) DBG | Skipping /home - not owner
	I0311 20:22:45.834586   27491 main.go:141] libmachine: (ha-834040) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines (perms=drwxr-xr-x)
	I0311 20:22:45.834605   27491 main.go:141] libmachine: (ha-834040) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube (perms=drwxr-xr-x)
	I0311 20:22:45.834614   27491 main.go:141] libmachine: (ha-834040) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004 (perms=drwxrwxr-x)
	I0311 20:22:45.834623   27491 main.go:141] libmachine: (ha-834040) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0311 20:22:45.834633   27491 main.go:141] libmachine: (ha-834040) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0311 20:22:45.834654   27491 main.go:141] libmachine: (ha-834040) Creating domain...
	I0311 20:22:45.835654   27491 main.go:141] libmachine: (ha-834040) define libvirt domain using xml: 
	I0311 20:22:45.835677   27491 main.go:141] libmachine: (ha-834040) <domain type='kvm'>
	I0311 20:22:45.835687   27491 main.go:141] libmachine: (ha-834040)   <name>ha-834040</name>
	I0311 20:22:45.835696   27491 main.go:141] libmachine: (ha-834040)   <memory unit='MiB'>2200</memory>
	I0311 20:22:45.835703   27491 main.go:141] libmachine: (ha-834040)   <vcpu>2</vcpu>
	I0311 20:22:45.835718   27491 main.go:141] libmachine: (ha-834040)   <features>
	I0311 20:22:45.835724   27491 main.go:141] libmachine: (ha-834040)     <acpi/>
	I0311 20:22:45.835728   27491 main.go:141] libmachine: (ha-834040)     <apic/>
	I0311 20:22:45.835733   27491 main.go:141] libmachine: (ha-834040)     <pae/>
	I0311 20:22:45.835741   27491 main.go:141] libmachine: (ha-834040)     
	I0311 20:22:45.835749   27491 main.go:141] libmachine: (ha-834040)   </features>
	I0311 20:22:45.835755   27491 main.go:141] libmachine: (ha-834040)   <cpu mode='host-passthrough'>
	I0311 20:22:45.835760   27491 main.go:141] libmachine: (ha-834040)   
	I0311 20:22:45.835764   27491 main.go:141] libmachine: (ha-834040)   </cpu>
	I0311 20:22:45.835816   27491 main.go:141] libmachine: (ha-834040)   <os>
	I0311 20:22:45.835841   27491 main.go:141] libmachine: (ha-834040)     <type>hvm</type>
	I0311 20:22:45.835848   27491 main.go:141] libmachine: (ha-834040)     <boot dev='cdrom'/>
	I0311 20:22:45.835852   27491 main.go:141] libmachine: (ha-834040)     <boot dev='hd'/>
	I0311 20:22:45.835857   27491 main.go:141] libmachine: (ha-834040)     <bootmenu enable='no'/>
	I0311 20:22:45.835861   27491 main.go:141] libmachine: (ha-834040)   </os>
	I0311 20:22:45.835866   27491 main.go:141] libmachine: (ha-834040)   <devices>
	I0311 20:22:45.835873   27491 main.go:141] libmachine: (ha-834040)     <disk type='file' device='cdrom'>
	I0311 20:22:45.835881   27491 main.go:141] libmachine: (ha-834040)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/boot2docker.iso'/>
	I0311 20:22:45.836290   27491 main.go:141] libmachine: (ha-834040)       <target dev='hdc' bus='scsi'/>
	I0311 20:22:45.836305   27491 main.go:141] libmachine: (ha-834040)       <readonly/>
	I0311 20:22:45.836318   27491 main.go:141] libmachine: (ha-834040)     </disk>
	I0311 20:22:45.836332   27491 main.go:141] libmachine: (ha-834040)     <disk type='file' device='disk'>
	I0311 20:22:45.836340   27491 main.go:141] libmachine: (ha-834040)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0311 20:22:45.836358   27491 main.go:141] libmachine: (ha-834040)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/ha-834040.rawdisk'/>
	I0311 20:22:45.836365   27491 main.go:141] libmachine: (ha-834040)       <target dev='hda' bus='virtio'/>
	I0311 20:22:45.836379   27491 main.go:141] libmachine: (ha-834040)     </disk>
	I0311 20:22:45.836386   27491 main.go:141] libmachine: (ha-834040)     <interface type='network'>
	I0311 20:22:45.836395   27491 main.go:141] libmachine: (ha-834040)       <source network='mk-ha-834040'/>
	I0311 20:22:45.836407   27491 main.go:141] libmachine: (ha-834040)       <model type='virtio'/>
	I0311 20:22:45.836415   27491 main.go:141] libmachine: (ha-834040)     </interface>
	I0311 20:22:45.836422   27491 main.go:141] libmachine: (ha-834040)     <interface type='network'>
	I0311 20:22:45.836436   27491 main.go:141] libmachine: (ha-834040)       <source network='default'/>
	I0311 20:22:45.836442   27491 main.go:141] libmachine: (ha-834040)       <model type='virtio'/>
	I0311 20:22:45.836455   27491 main.go:141] libmachine: (ha-834040)     </interface>
	I0311 20:22:45.836462   27491 main.go:141] libmachine: (ha-834040)     <serial type='pty'>
	I0311 20:22:45.836472   27491 main.go:141] libmachine: (ha-834040)       <target port='0'/>
	I0311 20:22:45.836478   27491 main.go:141] libmachine: (ha-834040)     </serial>
	I0311 20:22:45.836491   27491 main.go:141] libmachine: (ha-834040)     <console type='pty'>
	I0311 20:22:45.836498   27491 main.go:141] libmachine: (ha-834040)       <target type='serial' port='0'/>
	I0311 20:22:45.836513   27491 main.go:141] libmachine: (ha-834040)     </console>
	I0311 20:22:45.836520   27491 main.go:141] libmachine: (ha-834040)     <rng model='virtio'>
	I0311 20:22:45.836530   27491 main.go:141] libmachine: (ha-834040)       <backend model='random'>/dev/random</backend>
	I0311 20:22:45.836541   27491 main.go:141] libmachine: (ha-834040)     </rng>
	I0311 20:22:45.836549   27491 main.go:141] libmachine: (ha-834040)     
	I0311 20:22:45.836555   27491 main.go:141] libmachine: (ha-834040)     
	I0311 20:22:45.836576   27491 main.go:141] libmachine: (ha-834040)   </devices>
	I0311 20:22:45.836582   27491 main.go:141] libmachine: (ha-834040) </domain>
	I0311 20:22:45.836595   27491 main.go:141] libmachine: (ha-834040) 
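(Editor's note: the "define libvirt domain using xml" block above is followed by "Creating domain...". A minimal sketch of that pair of steps, assuming the libvirt Go bindings at libvirt.org/go/libvirt, is shown below; it is an approximation for readers of this report, not the minikube driver source.)

package sketch

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
)

// defineAndStart defines a persistent domain from XML like the block logged
// above and then boots it.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return fmt.Errorf("connect to libvirt: %w", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	// Create boots the previously defined (persistent) domain.
	if err := dom.Create(); err != nil {
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}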
	I0311 20:22:45.841126   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:c2:4b:c0 in network default
	I0311 20:22:45.841751   27491 main.go:141] libmachine: (ha-834040) Ensuring networks are active...
	I0311 20:22:45.841775   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:45.842479   27491 main.go:141] libmachine: (ha-834040) Ensuring network default is active
	I0311 20:22:45.842715   27491 main.go:141] libmachine: (ha-834040) Ensuring network mk-ha-834040 is active
	I0311 20:22:45.843152   27491 main.go:141] libmachine: (ha-834040) Getting domain xml...
	I0311 20:22:45.843813   27491 main.go:141] libmachine: (ha-834040) Creating domain...
	I0311 20:22:46.997557   27491 main.go:141] libmachine: (ha-834040) Waiting to get IP...
	I0311 20:22:46.998218   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:46.998632   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:46.998664   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:46.998626   27514 retry.go:31] will retry after 263.902152ms: waiting for machine to come up
	I0311 20:22:47.264098   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:47.264506   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:47.264539   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:47.264486   27514 retry.go:31] will retry after 266.30343ms: waiting for machine to come up
	I0311 20:22:47.531787   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:47.532158   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:47.532188   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:47.532111   27514 retry.go:31] will retry after 476.414298ms: waiting for machine to come up
	I0311 20:22:48.009646   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:48.010063   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:48.010096   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:48.010029   27514 retry.go:31] will retry after 600.032755ms: waiting for machine to come up
	I0311 20:22:48.611700   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:48.612092   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:48.612124   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:48.612052   27514 retry.go:31] will retry after 604.393037ms: waiting for machine to come up
	I0311 20:22:49.217955   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:49.218384   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:49.218407   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:49.218361   27514 retry.go:31] will retry after 886.712129ms: waiting for machine to come up
	I0311 20:22:50.106801   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:50.107120   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:50.107156   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:50.107081   27514 retry.go:31] will retry after 801.265373ms: waiting for machine to come up
	I0311 20:22:50.909467   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:50.909830   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:50.909857   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:50.909772   27514 retry.go:31] will retry after 1.484377047s: waiting for machine to come up
	I0311 20:22:52.396232   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:52.396652   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:52.396680   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:52.396616   27514 retry.go:31] will retry after 1.119763452s: waiting for machine to come up
	I0311 20:22:53.519124   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:53.519538   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:53.519560   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:53.519494   27514 retry.go:31] will retry after 1.725300378s: waiting for machine to come up
	I0311 20:22:55.247275   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:55.247727   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:55.247765   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:55.247697   27514 retry.go:31] will retry after 2.320384618s: waiting for machine to come up
	I0311 20:22:57.569649   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:22:57.570053   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:22:57.570076   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:22:57.570018   27514 retry.go:31] will retry after 2.529001577s: waiting for machine to come up
	I0311 20:23:00.101623   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:00.101988   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:23:00.102008   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:23:00.101952   27514 retry.go:31] will retry after 3.066008911s: waiting for machine to come up
	I0311 20:23:03.169009   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:03.169423   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find current IP address of domain ha-834040 in network mk-ha-834040
	I0311 20:23:03.169447   27491 main.go:141] libmachine: (ha-834040) DBG | I0311 20:23:03.169393   27514 retry.go:31] will retry after 3.89452115s: waiting for machine to come up
	I0311 20:23:07.065892   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.066320   27491 main.go:141] libmachine: (ha-834040) Found IP for machine: 192.168.39.128
	I0311 20:23:07.066349   27491 main.go:141] libmachine: (ha-834040) Reserving static IP address...
	I0311 20:23:07.066365   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has current primary IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.066654   27491 main.go:141] libmachine: (ha-834040) DBG | unable to find host DHCP lease matching {name: "ha-834040", mac: "52:54:00:33:6f:e8", ip: "192.168.39.128"} in network mk-ha-834040
	I0311 20:23:07.133337   27491 main.go:141] libmachine: (ha-834040) DBG | Getting to WaitForSSH function...
	I0311 20:23:07.133368   27491 main.go:141] libmachine: (ha-834040) Reserved static IP address: 192.168.39.128
	I0311 20:23:07.133415   27491 main.go:141] libmachine: (ha-834040) Waiting for SSH to be available...
	I0311 20:23:07.135659   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.135977   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.136006   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.136081   27491 main.go:141] libmachine: (ha-834040) DBG | Using SSH client type: external
	I0311 20:23:07.136103   27491 main.go:141] libmachine: (ha-834040) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa (-rw-------)
	I0311 20:23:07.136153   27491 main.go:141] libmachine: (ha-834040) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 20:23:07.136165   27491 main.go:141] libmachine: (ha-834040) DBG | About to run SSH command:
	I0311 20:23:07.136194   27491 main.go:141] libmachine: (ha-834040) DBG | exit 0
	I0311 20:23:07.260623   27491 main.go:141] libmachine: (ha-834040) DBG | SSH cmd err, output: <nil>: 
	I0311 20:23:07.260945   27491 main.go:141] libmachine: (ha-834040) KVM machine creation complete!
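(Editor's note: the "Waiting to get IP" / "will retry after ..." lines above come from a generic retry loop that probes the machine until DHCP and SSH are up. The self-contained Go sketch below shows that pattern in the abstract; the starting delay, jitter, timeout, and the SSH-port probe address are illustrative, not the driver's exact values.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"net"
	"time"
)

// waitFor retries probe with a growing, jittered delay until it succeeds or
// the overall timeout expires, mimicking the retry.go lines in the log.
func waitFor(probe func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond // starting point is illustrative
	for attempt := 1; ; attempt++ {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2)) // add jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
}

func main() {
	// Probe: is the machine's SSH port reachable yet? (address is illustrative)
	probe := func() error {
		conn, err := net.DialTimeout("tcp", "192.168.39.128:22", 2*time.Second)
		if err != nil {
			return errors.New("ssh port not reachable yet")
		}
		conn.Close()
		return nil
	}
	if err := waitFor(probe, 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}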
	I0311 20:23:07.261231   27491 main.go:141] libmachine: (ha-834040) Calling .GetConfigRaw
	I0311 20:23:07.261766   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:07.261936   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:07.262075   27491 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0311 20:23:07.262086   27491 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:23:07.263165   27491 main.go:141] libmachine: Detecting operating system of created instance...
	I0311 20:23:07.263178   27491 main.go:141] libmachine: Waiting for SSH to be available...
	I0311 20:23:07.263186   27491 main.go:141] libmachine: Getting to WaitForSSH function...
	I0311 20:23:07.263194   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.265722   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.266057   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.266083   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.266222   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:07.266405   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.266531   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.266638   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:07.266862   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:23:07.267063   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:23:07.267075   27491 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0311 20:23:07.368164   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:23:07.368188   27491 main.go:141] libmachine: Detecting the provisioner...
	I0311 20:23:07.368197   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.370723   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.371067   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.371102   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.371281   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:07.371481   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.371645   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.371800   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:07.371980   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:23:07.372154   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:23:07.372168   27491 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0311 20:23:07.478232   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0311 20:23:07.478289   27491 main.go:141] libmachine: found compatible host: buildroot
	I0311 20:23:07.478299   27491 main.go:141] libmachine: Provisioning with buildroot...
	I0311 20:23:07.478314   27491 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:23:07.478542   27491 buildroot.go:166] provisioning hostname "ha-834040"
	I0311 20:23:07.478567   27491 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:23:07.478744   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.481281   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.481603   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.481631   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.481811   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:07.481970   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.482121   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.482251   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:07.482435   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:23:07.482624   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:23:07.482637   27491 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-834040 && echo "ha-834040" | sudo tee /etc/hostname
	I0311 20:23:07.600305   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-834040
	
	I0311 20:23:07.600328   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.603722   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.604058   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.604081   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.604260   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:07.604461   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.604611   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.604726   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:07.604876   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:23:07.605027   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:23:07.605049   27491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-834040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-834040/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-834040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 20:23:07.715195   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:23:07.715219   27491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 20:23:07.715240   27491 buildroot.go:174] setting up certificates
	I0311 20:23:07.715253   27491 provision.go:84] configureAuth start
	I0311 20:23:07.715277   27491 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:23:07.715561   27491 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:23:07.718036   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.718363   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.718390   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.718555   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.720656   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.721040   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.721071   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.721184   27491 provision.go:143] copyHostCerts
	I0311 20:23:07.721222   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:23:07.721280   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 20:23:07.721292   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:23:07.721364   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 20:23:07.721476   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:23:07.721501   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 20:23:07.721508   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:23:07.721551   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 20:23:07.721613   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:23:07.721640   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 20:23:07.721649   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:23:07.721683   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 20:23:07.721756   27491 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.ha-834040 san=[127.0.0.1 192.168.39.128 ha-834040 localhost minikube]
	I0311 20:23:07.773153   27491 provision.go:177] copyRemoteCerts
	I0311 20:23:07.773206   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 20:23:07.773225   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.775507   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.775849   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.775897   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.776025   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:07.776204   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.776368   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:07.776500   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:23:07.862194   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0311 20:23:07.862272   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0311 20:23:07.890626   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0311 20:23:07.890683   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 20:23:07.918911   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0311 20:23:07.918960   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 20:23:07.945269   27491 provision.go:87] duration metric: took 229.999498ms to configureAuth
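(Editor's note: the configureAuth phase above copies the host CA material and signs a server certificate whose SANs appear in the "generating server cert ... san=[...]" line. The Go fragment below is a hypothetical illustration of that signing step only: it is not minikube's code, it uses an ECDSA key purely to keep the sketch short, and it assumes the CA certificate and key have already been parsed from ca.pem/ca-key.pem.)

package sketch

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate carrying the SANs seen in the log
// line above; it returns the DER-encoded certificate and its private key.
func newServerCert(caCert *x509.Certificate, caKey *ecdsa.PrivateKey) ([]byte, *ecdsa.PrivateKey, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-834040"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump below
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-834040", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.128")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}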
	I0311 20:23:07.945291   27491 buildroot.go:189] setting minikube options for container-runtime
	I0311 20:23:07.945489   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:23:07.945567   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:07.947915   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.948195   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:07.948220   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:07.948405   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:07.948589   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.948757   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:07.948916   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:07.949081   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:23:07.949268   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:23:07.949284   27491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 20:23:08.215386   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 20:23:08.215412   27491 main.go:141] libmachine: Checking connection to Docker...
	I0311 20:23:08.215428   27491 main.go:141] libmachine: (ha-834040) Calling .GetURL
	I0311 20:23:08.216647   27491 main.go:141] libmachine: (ha-834040) DBG | Using libvirt version 6000000
	I0311 20:23:08.218575   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.218828   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.218861   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.219034   27491 main.go:141] libmachine: Docker is up and running!
	I0311 20:23:08.219053   27491 main.go:141] libmachine: Reticulating splines...
	I0311 20:23:08.219061   27491 client.go:171] duration metric: took 22.778035881s to LocalClient.Create
	I0311 20:23:08.219090   27491 start.go:167] duration metric: took 22.778089023s to libmachine.API.Create "ha-834040"
	I0311 20:23:08.219100   27491 start.go:293] postStartSetup for "ha-834040" (driver="kvm2")
	I0311 20:23:08.219112   27491 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 20:23:08.219132   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:08.219341   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 20:23:08.219366   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:08.221263   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.221541   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.221572   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.221672   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:08.221840   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:08.221973   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:08.222090   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:23:08.305829   27491 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 20:23:08.310759   27491 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 20:23:08.310776   27491 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 20:23:08.310837   27491 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 20:23:08.310926   27491 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 20:23:08.310942   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /etc/ssl/certs/182352.pem
	I0311 20:23:08.311051   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 20:23:08.323084   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:23:08.348503   27491 start.go:296] duration metric: took 129.392519ms for postStartSetup
	I0311 20:23:08.348536   27491 main.go:141] libmachine: (ha-834040) Calling .GetConfigRaw
	I0311 20:23:08.349153   27491 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:23:08.351581   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.351957   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.351986   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.352150   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:23:08.352309   27491 start.go:128] duration metric: took 22.927641429s to createHost
	I0311 20:23:08.352328   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:08.354293   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.354584   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.354613   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.354728   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:08.354899   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:08.355061   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:08.355221   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:08.355352   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:23:08.355518   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:23:08.355536   27491 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 20:23:08.457684   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710188588.426853069
	
	I0311 20:23:08.457711   27491 fix.go:216] guest clock: 1710188588.426853069
	I0311 20:23:08.457721   27491 fix.go:229] Guest: 2024-03-11 20:23:08.426853069 +0000 UTC Remote: 2024-03-11 20:23:08.352319386 +0000 UTC m=+23.041906755 (delta=74.533683ms)
	I0311 20:23:08.457770   27491 fix.go:200] guest clock delta is within tolerance: 74.533683ms
	I0311 20:23:08.457777   27491 start.go:83] releasing machines lock for "ha-834040", held for 23.033177693s
	I0311 20:23:08.457798   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:08.458057   27491 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:23:08.460298   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.460603   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.460634   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.460782   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:08.461257   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:08.461420   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:08.461498   27491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 20:23:08.461535   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:08.461635   27491 ssh_runner.go:195] Run: cat /version.json
	I0311 20:23:08.461659   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:08.463986   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.464171   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.464264   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.464286   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.464440   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:08.464474   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:08.464499   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:08.464617   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:08.464633   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:08.464834   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:08.464838   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:08.464996   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:08.465000   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:23:08.465128   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:23:08.551817   27491 ssh_runner.go:195] Run: systemctl --version
	I0311 20:23:08.575684   27491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 20:23:08.737257   27491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 20:23:08.744645   27491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 20:23:08.744701   27491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 20:23:08.762282   27491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 20:23:08.762305   27491 start.go:494] detecting cgroup driver to use...
	I0311 20:23:08.762368   27491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 20:23:08.778367   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 20:23:08.792310   27491 docker.go:217] disabling cri-docker service (if available) ...
	I0311 20:23:08.792354   27491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 20:23:08.806314   27491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 20:23:08.821443   27491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 20:23:08.941704   27491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 20:23:09.081216   27491 docker.go:233] disabling docker service ...
	I0311 20:23:09.081287   27491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 20:23:09.097332   27491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 20:23:09.111565   27491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 20:23:09.250642   27491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 20:23:09.390462   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 20:23:09.405358   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 20:23:09.425278   27491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 20:23:09.425342   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:23:09.435796   27491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 20:23:09.435846   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:23:09.446390   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:23:09.456826   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:23:09.467528   27491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 20:23:09.479724   27491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 20:23:09.490634   27491 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 20:23:09.490676   27491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 20:23:09.503618   27491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 20:23:09.513243   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:23:09.652368   27491 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 20:23:09.792800   27491 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 20:23:09.792862   27491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 20:23:09.798501   27491 start.go:562] Will wait 60s for crictl version
	I0311 20:23:09.798548   27491 ssh_runner.go:195] Run: which crictl
	I0311 20:23:09.802566   27491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 20:23:09.841419   27491 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 20:23:09.841489   27491 ssh_runner.go:195] Run: crio --version
	I0311 20:23:09.870470   27491 ssh_runner.go:195] Run: crio --version
	I0311 20:23:09.901524   27491 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 20:23:09.902831   27491 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:23:09.905562   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:09.905872   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:09.905897   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:09.906097   27491 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 20:23:09.910532   27491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 20:23:09.923983   27491 kubeadm.go:877] updating cluster {Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 20:23:09.924069   27491 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:23:09.924102   27491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:23:09.971391   27491 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 20:23:09.971453   27491 ssh_runner.go:195] Run: which lz4
	I0311 20:23:09.975521   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0311 20:23:09.975594   27491 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 20:23:09.979798   27491 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 20:23:09.979814   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 20:23:11.857810   27491 crio.go:444] duration metric: took 1.882233993s to copy over tarball
	I0311 20:23:11.857873   27491 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 20:23:14.429503   27491 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.571601285s)
	I0311 20:23:14.429530   27491 crio.go:451] duration metric: took 2.571697352s to extract the tarball
	I0311 20:23:14.429537   27491 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 20:23:14.473160   27491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:23:14.527587   27491 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 20:23:14.527607   27491 cache_images.go:84] Images are preloaded, skipping loading
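The lines above are minikube staging the CRI-O image preload: copy the lz4 tarball to /preloaded.tar.lz4, unpack it into /var, then re-run crictl to confirm the images exist. A minimal manual equivalent, assuming the tarball has already been copied to the same path on the VM, looks like this:

    # Hedged sketch: manual equivalent of the preload staging shown in the log above,
    # run on the minikube VM after the tarball is at /preloaded.tar.lz4.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json | grep -c registry.k8s.io   # rough check that the k8s images are now present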
	I0311 20:23:14.527613   27491 kubeadm.go:928] updating node { 192.168.39.128 8443 v1.28.4 crio true true} ...
	I0311 20:23:14.527690   27491 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-834040 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 20:23:14.527746   27491 ssh_runner.go:195] Run: crio config
	I0311 20:23:14.580410   27491 cni.go:84] Creating CNI manager for ""
	I0311 20:23:14.580431   27491 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0311 20:23:14.580444   27491 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 20:23:14.580462   27491 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-834040 NodeName:ha-834040 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 20:23:14.580578   27491 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-834040"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 20:23:14.580598   27491 kube-vip.go:101] generating kube-vip config ...
	I0311 20:23:14.580664   27491 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
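The static pod manifest above runs kube-vip with ARP advertisement of the HA virtual IP 192.168.39.254 on eth0 and leader election in kube-system. A quick way to see which control-plane node currently owns the VIP is sketched below; the VIP, interface, and lease name come from the config above, and the assumption that the election is stored as a coordination.k8s.io Lease is kube-vip's default behaviour, not something shown in this log.

    # Hedged sketch: checking kube-vip state from a control-plane node.
    ip addr show dev eth0 | grep -q 192.168.39.254 && echo "this node holds the VIP" || echo "VIP held elsewhere"
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'   # current leader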
	I0311 20:23:14.580707   27491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 20:23:14.592943   27491 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 20:23:14.592995   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0311 20:23:14.603866   27491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0311 20:23:14.622284   27491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 20:23:14.640597   27491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0311 20:23:14.658813   27491 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0311 20:23:14.676840   27491 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0311 20:23:14.681059   27491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 20:23:14.695113   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:23:14.832190   27491 ssh_runner.go:195] Run: sudo systemctl start kubelet
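Both /etc/hosts updates above (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent pattern: filter out any existing entry for the name, append the desired one, and copy the result back. A generalized sketch of that pattern, with a hypothetical helper name not used by minikube itself:

    # Hedged sketch of the /etc/hosts update pattern used twice in the log above.
    add_hosts_entry() {   # hypothetical helper, for illustration only
      local ip="$1" name="$2"
      { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
      sudo cp "/tmp/hosts.$$" /etc/hosts
    }
    add_hosts_entry 192.168.39.1   host.minikube.internal
    add_hosts_entry 192.168.39.254 control-plane.minikube.internal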
	I0311 20:23:14.851793   27491 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040 for IP: 192.168.39.128
	I0311 20:23:14.851878   27491 certs.go:194] generating shared ca certs ...
	I0311 20:23:14.851908   27491 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:14.852110   27491 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 20:23:14.852168   27491 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 20:23:14.852184   27491 certs.go:256] generating profile certs ...
	I0311 20:23:14.852245   27491 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key
	I0311 20:23:14.852266   27491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.crt with IP's: []
	I0311 20:23:14.985304   27491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.crt ...
	I0311 20:23:14.985334   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.crt: {Name:mk8d6d8309a1ad51304337920d227e7e5d9c0124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:14.985496   27491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key ...
	I0311 20:23:14.985509   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key: {Name:mk1304b5cb243ef01eb7fb761ac1e689580d776a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:14.985618   27491 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.1488f95d
	I0311 20:23:14.985643   27491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.1488f95d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.254]
	I0311 20:23:15.178969   27491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.1488f95d ...
	I0311 20:23:15.179002   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.1488f95d: {Name:mk2407342d56deacb6e6a805a37e5e10b19062ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:15.179152   27491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.1488f95d ...
	I0311 20:23:15.179167   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.1488f95d: {Name:mkab517bae76f3fb8b939eae49568621f4bafaeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:15.179240   27491 certs.go:381] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.1488f95d -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt
	I0311 20:23:15.179321   27491 certs.go:385] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.1488f95d -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key
	I0311 20:23:15.179373   27491 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key
	I0311 20:23:15.179387   27491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt with IP's: []
	I0311 20:23:15.408046   27491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt ...
	I0311 20:23:15.408074   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt: {Name:mk3374ab63685e2a88ec78dcc274dc3977a541a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:15.408228   27491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key ...
	I0311 20:23:15.408247   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key: {Name:mk6c5436dc6ec77daf6bbc0f26adfe9debb5c3ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:15.408339   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0311 20:23:15.408360   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0311 20:23:15.408375   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0311 20:23:15.408391   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0311 20:23:15.408409   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0311 20:23:15.408428   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0311 20:23:15.408454   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0311 20:23:15.408474   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0311 20:23:15.408542   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 20:23:15.408586   27491 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 20:23:15.408603   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 20:23:15.408652   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 20:23:15.408685   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 20:23:15.408718   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 20:23:15.408789   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:23:15.408833   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /usr/share/ca-certificates/182352.pem
	I0311 20:23:15.408853   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:23:15.408871   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem -> /usr/share/ca-certificates/18235.pem
	I0311 20:23:15.409491   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 20:23:15.438274   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 20:23:15.465235   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 20:23:15.491202   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 20:23:15.516538   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0311 20:23:15.545154   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 20:23:15.572541   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 20:23:15.598771   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 20:23:15.626643   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 20:23:15.652447   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 20:23:15.682788   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 20:23:15.718846   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 20:23:15.746899   27491 ssh_runner.go:195] Run: openssl version
	I0311 20:23:15.753300   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 20:23:15.766289   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 20:23:15.771507   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 20:23:15.771556   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 20:23:15.777866   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 20:23:15.790374   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 20:23:15.802710   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 20:23:15.807724   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 20:23:15.807769   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 20:23:15.813844   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 20:23:15.826184   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 20:23:15.839114   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:23:15.844127   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:23:15.844161   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:23:15.850235   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
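The sequence above installs each CA file under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0 in this run). A sketch of how one of those hash names is derived and linked:

    # Hedged sketch: deriving the <hash>.0 symlink name for a trusted CA, as in the log above.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")        # prints e.g. b5213941, matching the log
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"       # the .0 suffix assumes no hash collision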
	I0311 20:23:15.862455   27491 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 20:23:15.867062   27491 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 20:23:15.867120   27491 kubeadm.go:391] StartCluster: {Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:23:15.867192   27491 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 20:23:15.867247   27491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 20:23:15.910993   27491 cri.go:89] found id: ""
	I0311 20:23:15.911045   27491 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0311 20:23:15.923549   27491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 20:23:15.938228   27491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 20:23:15.949031   27491 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 20:23:15.949048   27491 kubeadm.go:156] found existing configuration files:
	
	I0311 20:23:15.949091   27491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 20:23:15.959625   27491 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 20:23:15.959666   27491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 20:23:15.971165   27491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 20:23:15.981995   27491 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 20:23:15.982054   27491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 20:23:15.993169   27491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 20:23:16.003702   27491 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 20:23:16.003748   27491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 20:23:16.014895   27491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 20:23:16.025503   27491 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 20:23:16.025541   27491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 20:23:16.036481   27491 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 20:23:16.291292   27491 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 20:23:27.584467   27491 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 20:23:27.584546   27491 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 20:23:27.584633   27491 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 20:23:27.584816   27491 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 20:23:27.584932   27491 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 20:23:27.584993   27491 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 20:23:27.586626   27491 out.go:204]   - Generating certificates and keys ...
	I0311 20:23:27.586715   27491 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 20:23:27.586779   27491 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 20:23:27.586853   27491 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0311 20:23:27.586919   27491 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0311 20:23:27.586989   27491 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0311 20:23:27.587037   27491 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0311 20:23:27.587168   27491 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0311 20:23:27.587329   27491 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-834040 localhost] and IPs [192.168.39.128 127.0.0.1 ::1]
	I0311 20:23:27.587410   27491 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0311 20:23:27.587517   27491 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-834040 localhost] and IPs [192.168.39.128 127.0.0.1 ::1]
	I0311 20:23:27.587594   27491 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0311 20:23:27.587692   27491 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0311 20:23:27.587769   27491 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0311 20:23:27.587858   27491 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 20:23:27.587961   27491 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 20:23:27.588036   27491 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 20:23:27.588140   27491 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 20:23:27.588225   27491 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 20:23:27.588346   27491 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 20:23:27.588497   27491 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 20:23:27.591045   27491 out.go:204]   - Booting up control plane ...
	I0311 20:23:27.591154   27491 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 20:23:27.591246   27491 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 20:23:27.591332   27491 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 20:23:27.591485   27491 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 20:23:27.591635   27491 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 20:23:27.591696   27491 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 20:23:27.591903   27491 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 20:23:27.592028   27491 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.611827 seconds
	I0311 20:23:27.592157   27491 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 20:23:27.592315   27491 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 20:23:27.592401   27491 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 20:23:27.592657   27491 kubeadm.go:309] [mark-control-plane] Marking the node ha-834040 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 20:23:27.592748   27491 kubeadm.go:309] [bootstrap-token] Using token: 74fjk6.c6d8spiuhr71ss8c
	I0311 20:23:27.594066   27491 out.go:204]   - Configuring RBAC rules ...
	I0311 20:23:27.594169   27491 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 20:23:27.594266   27491 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 20:23:27.594383   27491 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 20:23:27.594543   27491 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 20:23:27.594684   27491 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 20:23:27.594765   27491 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 20:23:27.594873   27491 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 20:23:27.594941   27491 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 20:23:27.595007   27491 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 20:23:27.595019   27491 kubeadm.go:309] 
	I0311 20:23:27.595082   27491 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 20:23:27.595102   27491 kubeadm.go:309] 
	I0311 20:23:27.595201   27491 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 20:23:27.595212   27491 kubeadm.go:309] 
	I0311 20:23:27.595254   27491 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 20:23:27.595304   27491 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 20:23:27.595351   27491 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 20:23:27.595357   27491 kubeadm.go:309] 
	I0311 20:23:27.595399   27491 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 20:23:27.595405   27491 kubeadm.go:309] 
	I0311 20:23:27.595462   27491 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 20:23:27.595476   27491 kubeadm.go:309] 
	I0311 20:23:27.595548   27491 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 20:23:27.595646   27491 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 20:23:27.595732   27491 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 20:23:27.595741   27491 kubeadm.go:309] 
	I0311 20:23:27.595832   27491 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 20:23:27.595921   27491 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 20:23:27.595935   27491 kubeadm.go:309] 
	I0311 20:23:27.596034   27491 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 74fjk6.c6d8spiuhr71ss8c \
	I0311 20:23:27.596129   27491 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 20:23:27.596167   27491 kubeadm.go:309] 	--control-plane 
	I0311 20:23:27.596176   27491 kubeadm.go:309] 
	I0311 20:23:27.596251   27491 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 20:23:27.596259   27491 kubeadm.go:309] 
	I0311 20:23:27.596364   27491 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 74fjk6.c6d8spiuhr71ss8c \
	I0311 20:23:27.596535   27491 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
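The join commands printed above embed a --discovery-token-ca-cert-hash. That hash can be recomputed from the cluster CA to verify it before joining another node; the sketch below uses the standard kubeadm procedure, with the CA path taken from the certificatesDir in the kubeadm config above (/var/lib/minikube/certs) rather than kubeadm's usual /etc/kubernetes/pki.

    # Hedged sketch: recomputing the discovery-token CA cert hash from the cluster CA.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # Expected output for this run: 7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054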
	I0311 20:23:27.596549   27491 cni.go:84] Creating CNI manager for ""
	I0311 20:23:27.596556   27491 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0311 20:23:27.598282   27491 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0311 20:23:27.599668   27491 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0311 20:23:27.613258   27491 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0311 20:23:27.613276   27491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0311 20:23:27.680863   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0311 20:23:28.782990   27491 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.102076024s)
	I0311 20:23:28.783029   27491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 20:23:28.783129   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:28.783136   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-834040 minikube.k8s.io/updated_at=2024_03_11T20_23_28_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=ha-834040 minikube.k8s.io/primary=true
	I0311 20:23:28.797118   27491 ops.go:34] apiserver oom_adj: -16
	I0311 20:23:28.936340   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:29.436475   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:29.937326   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:30.437245   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:30.937039   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:31.436433   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:31.936475   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:32.436913   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:32.936838   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:33.436996   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:33.937346   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:34.437157   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:34.936644   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:35.436749   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:35.936994   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:36.437427   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:36.937124   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:37.436981   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:37.936722   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:38.436722   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:38.936659   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:39.437313   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 20:23:39.554352   27491 kubeadm.go:1106] duration metric: took 10.771297587s to wait for elevateKubeSystemPrivileges
	W0311 20:23:39.554386   27491 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 20:23:39.554396   27491 kubeadm.go:393] duration metric: took 23.687290613s to StartCluster
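The repeated "kubectl get sa default" calls above are minikube waiting for the default service account to appear before it finishes StartCluster (the elevateKubeSystemPrivileges step). An equivalent wait loop, using the same binary and kubeconfig paths that appear in the log, would be:

    # Hedged sketch of the wait loop behind the repeated "get sa default" calls above.
    KUBECTL=/var/lib/minikube/binaries/v1.28.4/kubectl
    KUBECONFIG_PATH=/var/lib/minikube/kubeconfig
    until sudo "$KUBECTL" --kubeconfig="$KUBECONFIG_PATH" get sa default >/dev/null 2>&1; do
      sleep 0.5   # the log shows roughly two attempts per second
    done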
	I0311 20:23:39.554417   27491 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:39.554505   27491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:23:39.555129   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:23:39.555362   27491 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:23:39.555385   27491 start.go:240] waiting for startup goroutines ...
	I0311 20:23:39.555370   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0311 20:23:39.555393   27491 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 20:23:39.555475   27491 addons.go:69] Setting storage-provisioner=true in profile "ha-834040"
	I0311 20:23:39.555483   27491 addons.go:69] Setting default-storageclass=true in profile "ha-834040"
	I0311 20:23:39.555513   27491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-834040"
	I0311 20:23:39.555534   27491 addons.go:234] Setting addon storage-provisioner=true in "ha-834040"
	I0311 20:23:39.555570   27491 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:23:39.555599   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:23:39.555958   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:23:39.555965   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:23:39.555987   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:23:39.555992   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:23:39.570754   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33465
	I0311 20:23:39.570870   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44611
	I0311 20:23:39.571149   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:23:39.571274   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:23:39.571660   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:23:39.571677   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:23:39.571784   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:23:39.571802   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:23:39.572016   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:23:39.572097   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:23:39.572225   27491 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:23:39.572632   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:23:39.572662   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:23:39.574400   27491 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:23:39.574753   27491 kapi.go:59] client config for ha-834040: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.crt", KeyFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key", CAFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55640), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 20:23:39.575281   27491 cert_rotation.go:137] Starting client certificate rotation controller
	I0311 20:23:39.575487   27491 addons.go:234] Setting addon default-storageclass=true in "ha-834040"
	I0311 20:23:39.575530   27491 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:23:39.575906   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:23:39.575945   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:23:39.588200   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35557
	I0311 20:23:39.588706   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:23:39.589189   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:23:39.589212   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:23:39.589535   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:23:39.589606   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44189
	I0311 20:23:39.589684   27491 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:23:39.589917   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:23:39.590358   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:23:39.590381   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:23:39.590712   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:23:39.591324   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:23:39.591348   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:23:39.591542   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:39.593976   27491 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 20:23:39.595301   27491 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 20:23:39.595319   27491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 20:23:39.595336   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:39.598640   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:39.599081   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:39.599102   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:39.599298   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:39.599488   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:39.599741   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:39.599890   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:23:39.607515   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40621
	I0311 20:23:39.607899   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:23:39.608364   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:23:39.608390   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:23:39.608690   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:23:39.608909   27491 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:23:39.610421   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:23:39.610693   27491 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 20:23:39.610711   27491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 20:23:39.610729   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:23:39.613295   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:39.613655   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:23:39.613686   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:23:39.613784   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:23:39.613948   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:23:39.614075   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:23:39.614210   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:23:39.694915   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0311 20:23:39.805571   27491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 20:23:39.812998   27491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 20:23:40.457692   27491 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
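The long sed pipeline at 20:23:39.694915 rewrites the coredns ConfigMap so that in-cluster lookups of host.minikube.internal resolve to 192.168.39.1; the line above confirms the record was injected. A way to inspect the result (the hosts block described in the comment is reconstructed from the sed expression, not dumped from the cluster):

    # Hedged sketch: inspecting the Corefile after the host-record injection above.
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # Expect a "hosts { 192.168.39.1 host.minikube.internal / fallthrough }" block ahead of
    # the "forward . /etc/resolv.conf" line, plus a "log" directive, per the sed in the log.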
	I0311 20:23:40.505920   27491 main.go:141] libmachine: Making call to close driver server
	I0311 20:23:40.505946   27491 main.go:141] libmachine: (ha-834040) Calling .Close
	I0311 20:23:40.506204   27491 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:23:40.506221   27491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:23:40.506222   27491 main.go:141] libmachine: (ha-834040) DBG | Closing plugin on server side
	I0311 20:23:40.506229   27491 main.go:141] libmachine: Making call to close driver server
	I0311 20:23:40.506237   27491 main.go:141] libmachine: (ha-834040) Calling .Close
	I0311 20:23:40.506444   27491 main.go:141] libmachine: (ha-834040) DBG | Closing plugin on server side
	I0311 20:23:40.506463   27491 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:23:40.506476   27491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:23:40.506600   27491 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0311 20:23:40.506613   27491 round_trippers.go:469] Request Headers:
	I0311 20:23:40.506623   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:23:40.506634   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:23:40.517230   27491 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0311 20:23:40.517954   27491 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0311 20:23:40.517973   27491 round_trippers.go:469] Request Headers:
	I0311 20:23:40.517984   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:23:40.517990   27491 round_trippers.go:473]     Content-Type: application/json
	I0311 20:23:40.517995   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:23:40.520666   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
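The GET/PUT pair above is minikube marking the "standard" StorageClass as the cluster default. An equivalent manual check and patch (illustrative only; the annotation key is the standard Kubernetes one):

  kubectl get storageclass standard -o yaml | grep is-default-class
  # mark it as the default by hand if needed:
  kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'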
	I0311 20:23:40.520938   27491 main.go:141] libmachine: Making call to close driver server
	I0311 20:23:40.520954   27491 main.go:141] libmachine: (ha-834040) Calling .Close
	I0311 20:23:40.521199   27491 main.go:141] libmachine: (ha-834040) DBG | Closing plugin on server side
	I0311 20:23:40.521239   27491 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:23:40.521253   27491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:23:40.765874   27491 main.go:141] libmachine: Making call to close driver server
	I0311 20:23:40.765915   27491 main.go:141] libmachine: (ha-834040) Calling .Close
	I0311 20:23:40.766216   27491 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:23:40.766236   27491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:23:40.766253   27491 main.go:141] libmachine: Making call to close driver server
	I0311 20:23:40.766265   27491 main.go:141] libmachine: (ha-834040) Calling .Close
	I0311 20:23:40.766270   27491 main.go:141] libmachine: (ha-834040) DBG | Closing plugin on server side
	I0311 20:23:40.766565   27491 main.go:141] libmachine: (ha-834040) DBG | Closing plugin on server side
	I0311 20:23:40.766597   27491 main.go:141] libmachine: Successfully made call to close driver server
	I0311 20:23:40.766637   27491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 20:23:40.768431   27491 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0311 20:23:40.769546   27491 addons.go:505] duration metric: took 1.214154488s for enable addons: enabled=[default-storageclass storage-provisioner]
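Both addons can also be toggled after the fact from the minikube CLI; a sketch using the profile name from this run:

  minikube -p ha-834040 addons list
  minikube -p ha-834040 addons enable storage-provisioner
  minikube -p ha-834040 addons disable default-storageclass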
	I0311 20:23:40.769588   27491 start.go:245] waiting for cluster config update ...
	I0311 20:23:40.769603   27491 start.go:254] writing updated cluster config ...
	I0311 20:23:40.771148   27491 out.go:177] 
	I0311 20:23:40.772436   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:23:40.772500   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:23:40.774142   27491 out.go:177] * Starting "ha-834040-m02" control-plane node in "ha-834040" cluster
	I0311 20:23:40.775808   27491 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:23:40.775830   27491 cache.go:56] Caching tarball of preloaded images
	I0311 20:23:40.775924   27491 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 20:23:40.775937   27491 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 20:23:40.776026   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:23:40.776355   27491 start.go:360] acquireMachinesLock for ha-834040-m02: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 20:23:40.776405   27491 start.go:364] duration metric: took 29.972µs to acquireMachinesLock for "ha-834040-m02"
	I0311 20:23:40.776429   27491 start.go:93] Provisioning new machine with config: &{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
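The Nodes slice in the config above gains an m02 entry with an empty IP until the new VM is provisioned. To inspect what gets written to config.json (jq is not part of the run, it is only used here for pretty-printing; path as logged above):

  jq '.Nodes' /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json
  minikube profile list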
	I0311 20:23:40.776499   27491 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0311 20:23:40.778149   27491 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 20:23:40.778231   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:23:40.778259   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:23:40.792485   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
	I0311 20:23:40.792879   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:23:40.793313   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:23:40.793334   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:23:40.793623   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:23:40.793831   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetMachineName
	I0311 20:23:40.793986   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:23:40.794127   27491 start.go:159] libmachine.API.Create for "ha-834040" (driver="kvm2")
	I0311 20:23:40.794165   27491 client.go:168] LocalClient.Create starting
	I0311 20:23:40.794208   27491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem
	I0311 20:23:40.794257   27491 main.go:141] libmachine: Decoding PEM data...
	I0311 20:23:40.794271   27491 main.go:141] libmachine: Parsing certificate...
	I0311 20:23:40.794326   27491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem
	I0311 20:23:40.794344   27491 main.go:141] libmachine: Decoding PEM data...
	I0311 20:23:40.794354   27491 main.go:141] libmachine: Parsing certificate...
	I0311 20:23:40.794369   27491 main.go:141] libmachine: Running pre-create checks...
	I0311 20:23:40.794376   27491 main.go:141] libmachine: (ha-834040-m02) Calling .PreCreateCheck
	I0311 20:23:40.794530   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetConfigRaw
	I0311 20:23:40.794857   27491 main.go:141] libmachine: Creating machine...
	I0311 20:23:40.794869   27491 main.go:141] libmachine: (ha-834040-m02) Calling .Create
	I0311 20:23:40.794986   27491 main.go:141] libmachine: (ha-834040-m02) Creating KVM machine...
	I0311 20:23:40.796130   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found existing default KVM network
	I0311 20:23:40.796227   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found existing private KVM network mk-ha-834040
	I0311 20:23:40.796357   27491 main.go:141] libmachine: (ha-834040-m02) Setting up store path in /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02 ...
	I0311 20:23:40.796383   27491 main.go:141] libmachine: (ha-834040-m02) Building disk image from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0311 20:23:40.796447   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:40.796348   27831 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:23:40.796564   27491 main.go:141] libmachine: (ha-834040-m02) Downloading /home/jenkins/minikube-integration/18358-11004/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0311 20:23:41.004705   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:41.004590   27831 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa...
	I0311 20:23:41.167886   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:41.167788   27831 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/ha-834040-m02.rawdisk...
	I0311 20:23:41.167935   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Writing magic tar header
	I0311 20:23:41.167948   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Writing SSH key tar header
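The key and raw disk created above are roughly what the following would produce by hand; minikube writes the raw disk (plus a small tar header carrying the SSH key) itself rather than calling these tools, so treat this as an approximation:

  ssh-keygen -t rsa -N '' -f /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa
  qemu-img create -f raw /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/ha-834040-m02.rawdisk 20000M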
	I0311 20:23:41.168800   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:41.168642   27831 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02 ...
	I0311 20:23:41.169426   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02
	I0311 20:23:41.169447   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines
	I0311 20:23:41.169460   27491 main.go:141] libmachine: (ha-834040-m02) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02 (perms=drwx------)
	I0311 20:23:41.169472   27491 main.go:141] libmachine: (ha-834040-m02) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines (perms=drwxr-xr-x)
	I0311 20:23:41.169483   27491 main.go:141] libmachine: (ha-834040-m02) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube (perms=drwxr-xr-x)
	I0311 20:23:41.169499   27491 main.go:141] libmachine: (ha-834040-m02) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004 (perms=drwxrwxr-x)
	I0311 20:23:41.169512   27491 main.go:141] libmachine: (ha-834040-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0311 20:23:41.169525   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:23:41.169541   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004
	I0311 20:23:41.169559   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0311 20:23:41.169572   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home/jenkins
	I0311 20:23:41.169580   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Checking permissions on dir: /home
	I0311 20:23:41.169592   27491 main.go:141] libmachine: (ha-834040-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0311 20:23:41.169606   27491 main.go:141] libmachine: (ha-834040-m02) Creating domain...
	I0311 20:23:41.169626   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Skipping /home - not owner
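The permission fixing above amounts to tightening the machine directory and making the parents traversable, i.e. roughly:

  chmod 700 /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02
  chmod 755 /home/jenkins/minikube-integration/18358-11004/.minikube/machines
  # and so on up the tree, skipping directories the jenkins user does not own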
	I0311 20:23:41.170361   27491 main.go:141] libmachine: (ha-834040-m02) define libvirt domain using xml: 
	I0311 20:23:41.170382   27491 main.go:141] libmachine: (ha-834040-m02) <domain type='kvm'>
	I0311 20:23:41.170393   27491 main.go:141] libmachine: (ha-834040-m02)   <name>ha-834040-m02</name>
	I0311 20:23:41.170401   27491 main.go:141] libmachine: (ha-834040-m02)   <memory unit='MiB'>2200</memory>
	I0311 20:23:41.170410   27491 main.go:141] libmachine: (ha-834040-m02)   <vcpu>2</vcpu>
	I0311 20:23:41.170424   27491 main.go:141] libmachine: (ha-834040-m02)   <features>
	I0311 20:23:41.170437   27491 main.go:141] libmachine: (ha-834040-m02)     <acpi/>
	I0311 20:23:41.170444   27491 main.go:141] libmachine: (ha-834040-m02)     <apic/>
	I0311 20:23:41.170451   27491 main.go:141] libmachine: (ha-834040-m02)     <pae/>
	I0311 20:23:41.170456   27491 main.go:141] libmachine: (ha-834040-m02)     
	I0311 20:23:41.170462   27491 main.go:141] libmachine: (ha-834040-m02)   </features>
	I0311 20:23:41.170469   27491 main.go:141] libmachine: (ha-834040-m02)   <cpu mode='host-passthrough'>
	I0311 20:23:41.170475   27491 main.go:141] libmachine: (ha-834040-m02)   
	I0311 20:23:41.170481   27491 main.go:141] libmachine: (ha-834040-m02)   </cpu>
	I0311 20:23:41.170487   27491 main.go:141] libmachine: (ha-834040-m02)   <os>
	I0311 20:23:41.170492   27491 main.go:141] libmachine: (ha-834040-m02)     <type>hvm</type>
	I0311 20:23:41.170517   27491 main.go:141] libmachine: (ha-834040-m02)     <boot dev='cdrom'/>
	I0311 20:23:41.170540   27491 main.go:141] libmachine: (ha-834040-m02)     <boot dev='hd'/>
	I0311 20:23:41.170550   27491 main.go:141] libmachine: (ha-834040-m02)     <bootmenu enable='no'/>
	I0311 20:23:41.170560   27491 main.go:141] libmachine: (ha-834040-m02)   </os>
	I0311 20:23:41.170569   27491 main.go:141] libmachine: (ha-834040-m02)   <devices>
	I0311 20:23:41.170580   27491 main.go:141] libmachine: (ha-834040-m02)     <disk type='file' device='cdrom'>
	I0311 20:23:41.170591   27491 main.go:141] libmachine: (ha-834040-m02)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/boot2docker.iso'/>
	I0311 20:23:41.170598   27491 main.go:141] libmachine: (ha-834040-m02)       <target dev='hdc' bus='scsi'/>
	I0311 20:23:41.170603   27491 main.go:141] libmachine: (ha-834040-m02)       <readonly/>
	I0311 20:23:41.170613   27491 main.go:141] libmachine: (ha-834040-m02)     </disk>
	I0311 20:23:41.170627   27491 main.go:141] libmachine: (ha-834040-m02)     <disk type='file' device='disk'>
	I0311 20:23:41.170647   27491 main.go:141] libmachine: (ha-834040-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0311 20:23:41.170678   27491 main.go:141] libmachine: (ha-834040-m02)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/ha-834040-m02.rawdisk'/>
	I0311 20:23:41.170690   27491 main.go:141] libmachine: (ha-834040-m02)       <target dev='hda' bus='virtio'/>
	I0311 20:23:41.170702   27491 main.go:141] libmachine: (ha-834040-m02)     </disk>
	I0311 20:23:41.170714   27491 main.go:141] libmachine: (ha-834040-m02)     <interface type='network'>
	I0311 20:23:41.170725   27491 main.go:141] libmachine: (ha-834040-m02)       <source network='mk-ha-834040'/>
	I0311 20:23:41.170732   27491 main.go:141] libmachine: (ha-834040-m02)       <model type='virtio'/>
	I0311 20:23:41.170739   27491 main.go:141] libmachine: (ha-834040-m02)     </interface>
	I0311 20:23:41.170756   27491 main.go:141] libmachine: (ha-834040-m02)     <interface type='network'>
	I0311 20:23:41.170769   27491 main.go:141] libmachine: (ha-834040-m02)       <source network='default'/>
	I0311 20:23:41.170780   27491 main.go:141] libmachine: (ha-834040-m02)       <model type='virtio'/>
	I0311 20:23:41.170791   27491 main.go:141] libmachine: (ha-834040-m02)     </interface>
	I0311 20:23:41.170804   27491 main.go:141] libmachine: (ha-834040-m02)     <serial type='pty'>
	I0311 20:23:41.170830   27491 main.go:141] libmachine: (ha-834040-m02)       <target port='0'/>
	I0311 20:23:41.170852   27491 main.go:141] libmachine: (ha-834040-m02)     </serial>
	I0311 20:23:41.170867   27491 main.go:141] libmachine: (ha-834040-m02)     <console type='pty'>
	I0311 20:23:41.170880   27491 main.go:141] libmachine: (ha-834040-m02)       <target type='serial' port='0'/>
	I0311 20:23:41.170893   27491 main.go:141] libmachine: (ha-834040-m02)     </console>
	I0311 20:23:41.170904   27491 main.go:141] libmachine: (ha-834040-m02)     <rng model='virtio'>
	I0311 20:23:41.170916   27491 main.go:141] libmachine: (ha-834040-m02)       <backend model='random'>/dev/random</backend>
	I0311 20:23:41.170931   27491 main.go:141] libmachine: (ha-834040-m02)     </rng>
	I0311 20:23:41.170943   27491 main.go:141] libmachine: (ha-834040-m02)     
	I0311 20:23:41.170950   27491 main.go:141] libmachine: (ha-834040-m02)     
	I0311 20:23:41.170974   27491 main.go:141] libmachine: (ha-834040-m02)   </devices>
	I0311 20:23:41.170984   27491 main.go:141] libmachine: (ha-834040-m02) </domain>
	I0311 20:23:41.171018   27491 main.go:141] libmachine: (ha-834040-m02) 
	I0311 20:23:41.177811   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:58:dc:7f in network default
	I0311 20:23:41.178336   27491 main.go:141] libmachine: (ha-834040-m02) Ensuring networks are active...
	I0311 20:23:41.178354   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:41.178969   27491 main.go:141] libmachine: (ha-834040-m02) Ensuring network default is active
	I0311 20:23:41.179227   27491 main.go:141] libmachine: (ha-834040-m02) Ensuring network mk-ha-834040 is active
	I0311 20:23:41.179558   27491 main.go:141] libmachine: (ha-834040-m02) Getting domain xml...
	I0311 20:23:41.180207   27491 main.go:141] libmachine: (ha-834040-m02) Creating domain...
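The domain XML above is defined and started through the libvirt API; the same sequence with virsh, using the URI, network, and domain names from this run, looks roughly like this (the XML logged above is assumed to be saved to a file first):

  virsh -c qemu:///system net-start default          # errors harmlessly if already active
  virsh -c qemu:///system net-start mk-ha-834040     # likewise
  virsh -c qemu:///system define ha-834040-m02.xml
  virsh -c qemu:///system start ha-834040-m02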
	I0311 20:23:42.419030   27491 main.go:141] libmachine: (ha-834040-m02) Waiting to get IP...
	I0311 20:23:42.419932   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:42.420385   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:42.420413   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:42.420368   27831 retry.go:31] will retry after 217.532188ms: waiting for machine to come up
	I0311 20:23:42.639741   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:42.640457   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:42.640500   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:42.640372   27831 retry.go:31] will retry after 333.50749ms: waiting for machine to come up
	I0311 20:23:42.976015   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:42.976464   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:42.976498   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:42.976413   27831 retry.go:31] will retry after 394.228373ms: waiting for machine to come up
	I0311 20:23:43.372441   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:43.372843   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:43.372899   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:43.372814   27831 retry.go:31] will retry after 486.843036ms: waiting for machine to come up
	I0311 20:23:43.861414   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:43.861827   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:43.861854   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:43.861782   27831 retry.go:31] will retry after 613.031869ms: waiting for machine to come up
	I0311 20:23:44.476018   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:44.476408   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:44.476436   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:44.476359   27831 retry.go:31] will retry after 651.873525ms: waiting for machine to come up
	I0311 20:23:45.130232   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:45.130649   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:45.130672   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:45.130601   27831 retry.go:31] will retry after 1.171639293s: waiting for machine to come up
	I0311 20:23:46.303731   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:46.304221   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:46.304283   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:46.304202   27831 retry.go:31] will retry after 1.432679492s: waiting for machine to come up
	I0311 20:23:47.738705   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:47.739138   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:47.739164   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:47.739097   27831 retry.go:31] will retry after 1.483296056s: waiting for machine to come up
	I0311 20:23:49.224835   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:49.225279   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:49.225309   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:49.225215   27831 retry.go:31] will retry after 1.659262357s: waiting for machine to come up
	I0311 20:23:50.886341   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:50.886726   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:50.886753   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:50.886665   27831 retry.go:31] will retry after 2.704023891s: waiting for machine to come up
	I0311 20:23:53.593500   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:53.593958   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:53.593981   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:53.593899   27831 retry.go:31] will retry after 3.13007858s: waiting for machine to come up
	I0311 20:23:56.725318   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:56.725702   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:56.725730   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:56.725658   27831 retry.go:31] will retry after 3.149880361s: waiting for machine to come up
	I0311 20:23:59.877708   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:23:59.878066   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find current IP address of domain ha-834040-m02 in network mk-ha-834040
	I0311 20:23:59.878103   27491 main.go:141] libmachine: (ha-834040-m02) DBG | I0311 20:23:59.878035   27831 retry.go:31] will retry after 3.423556103s: waiting for machine to come up
	I0311 20:24:03.304140   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:03.304598   27491 main.go:141] libmachine: (ha-834040-m02) Found IP for machine: 192.168.39.101
	I0311 20:24:03.304628   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has current primary IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:03.304641   27491 main.go:141] libmachine: (ha-834040-m02) Reserving static IP address...
	I0311 20:24:03.305014   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find host DHCP lease matching {name: "ha-834040-m02", mac: "52:54:00:82:4e:e5", ip: "192.168.39.101"} in network mk-ha-834040
	I0311 20:24:03.374130   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Getting to WaitForSSH function...
	I0311 20:24:03.374159   27491 main.go:141] libmachine: (ha-834040-m02) Reserved static IP address: 192.168.39.101
	I0311 20:24:03.374173   27491 main.go:141] libmachine: (ha-834040-m02) Waiting for SSH to be available...
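The retry loop above polls the libvirt DHCP leases for the VM's MAC on the private network, then pins the address it received. A manual equivalent (the net-update call is only a sketch; minikube performs the reservation through the API):

  # wait until the MAC shows up in the lease table
  until virsh -c qemu:///system net-dhcp-leases mk-ha-834040 | grep -q '52:54:00:82:4e:e5'; do sleep 2; done
  # pin 192.168.39.101 to that MAC for future boots
  virsh -c qemu:///system net-update mk-ha-834040 add ip-dhcp-host \
    "<host mac='52:54:00:82:4e:e5' name='ha-834040-m02' ip='192.168.39.101'/>" --live --config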
	I0311 20:24:03.376636   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:03.377035   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040
	I0311 20:24:03.377063   27491 main.go:141] libmachine: (ha-834040-m02) DBG | unable to find defined IP address of network mk-ha-834040 interface with MAC address 52:54:00:82:4e:e5
	I0311 20:24:03.377200   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Using SSH client type: external
	I0311 20:24:03.377226   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa (-rw-------)
	I0311 20:24:03.377274   27491 main.go:141] libmachine: (ha-834040-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 20:24:03.377295   27491 main.go:141] libmachine: (ha-834040-m02) DBG | About to run SSH command:
	I0311 20:24:03.377313   27491 main.go:141] libmachine: (ha-834040-m02) DBG | exit 0
	I0311 20:24:03.380589   27491 main.go:141] libmachine: (ha-834040-m02) DBG | SSH cmd err, output: exit status 255: 
	I0311 20:24:03.380611   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0311 20:24:03.380620   27491 main.go:141] libmachine: (ha-834040-m02) DBG | command : exit 0
	I0311 20:24:03.380628   27491 main.go:141] libmachine: (ha-834040-m02) DBG | err     : exit status 255
	I0311 20:24:03.380639   27491 main.go:141] libmachine: (ha-834040-m02) DBG | output  : 
	I0311 20:24:06.380794   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Getting to WaitForSSH function...
	I0311 20:24:06.383297   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.383746   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.383779   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.383905   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Using SSH client type: external
	I0311 20:24:06.383933   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa (-rw-------)
	I0311 20:24:06.383971   27491 main.go:141] libmachine: (ha-834040-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 20:24:06.383986   27491 main.go:141] libmachine: (ha-834040-m02) DBG | About to run SSH command:
	I0311 20:24:06.384001   27491 main.go:141] libmachine: (ha-834040-m02) DBG | exit 0
	I0311 20:24:06.513079   27491 main.go:141] libmachine: (ha-834040-m02) DBG | SSH cmd err, output: <nil>: 
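WaitForSSH simply retries "exit 0" over SSH until the guest answers; the first attempt above fails with status 255 because the address was not yet reachable. The equivalent loop, with the key path and options logged above:

  until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
        -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa \
        docker@192.168.39.101 'exit 0'; do
    sleep 3
  done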
	I0311 20:24:06.513322   27491 main.go:141] libmachine: (ha-834040-m02) KVM machine creation complete!
	I0311 20:24:06.513618   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetConfigRaw
	I0311 20:24:06.514112   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:06.514295   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:06.514454   27491 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0311 20:24:06.514472   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetState
	I0311 20:24:06.515648   27491 main.go:141] libmachine: Detecting operating system of created instance...
	I0311 20:24:06.515662   27491 main.go:141] libmachine: Waiting for SSH to be available...
	I0311 20:24:06.515670   27491 main.go:141] libmachine: Getting to WaitForSSH function...
	I0311 20:24:06.515688   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:06.517702   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.518022   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.518046   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.518173   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:06.518319   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.518466   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.518590   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:06.518760   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:24:06.518949   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0311 20:24:06.518970   27491 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0311 20:24:06.620010   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:24:06.620036   27491 main.go:141] libmachine: Detecting the provisioner...
	I0311 20:24:06.620043   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:06.622606   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.622909   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.622923   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.623126   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:06.623323   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.623481   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.623627   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:06.623786   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:24:06.623952   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0311 20:24:06.623962   27491 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0311 20:24:06.729728   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0311 20:24:06.729793   27491 main.go:141] libmachine: found compatible host: buildroot
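Provisioner detection is just reading /etc/os-release on the guest and matching the ID; the same check by hand:

  ssh -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa \
      docker@192.168.39.101 '. /etc/os-release && echo "$ID $VERSION_ID"'
  # buildroot 2023.02.9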
	I0311 20:24:06.729802   27491 main.go:141] libmachine: Provisioning with buildroot...
	I0311 20:24:06.729809   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetMachineName
	I0311 20:24:06.729997   27491 buildroot.go:166] provisioning hostname "ha-834040-m02"
	I0311 20:24:06.730024   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetMachineName
	I0311 20:24:06.730223   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:06.732708   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.733081   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.733108   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.733237   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:06.733439   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.733607   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.733765   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:06.733912   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:24:06.734117   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0311 20:24:06.734141   27491 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-834040-m02 && echo "ha-834040-m02" | sudo tee /etc/hostname
	I0311 20:24:06.852819   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-834040-m02
	
	I0311 20:24:06.852848   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:06.855276   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.855581   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.855610   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.855733   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:06.855923   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.856126   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:06.856276   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:06.856421   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:24:06.856595   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0311 20:24:06.856617   27491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-834040-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-834040-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-834040-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 20:24:06.974343   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:24:06.974366   27491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 20:24:06.974392   27491 buildroot.go:174] setting up certificates
	I0311 20:24:06.974403   27491 provision.go:84] configureAuth start
	I0311 20:24:06.974415   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetMachineName
	I0311 20:24:06.974661   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:24:06.976862   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.977166   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.977193   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.977289   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:06.979104   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.979416   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:06.979436   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:06.979546   27491 provision.go:143] copyHostCerts
	I0311 20:24:06.979573   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:24:06.979599   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 20:24:06.979608   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:24:06.979668   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 20:24:06.979730   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:24:06.979748   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 20:24:06.979754   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:24:06.979776   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 20:24:06.979814   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:24:06.979831   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 20:24:06.979834   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:24:06.979855   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 20:24:06.979896   27491 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.ha-834040-m02 san=[127.0.0.1 192.168.39.101 ha-834040-m02 localhost minikube]
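The server certificate above is signed by the minikube CA with the listed SANs and a 26280h (roughly three year, 1095 day) lifetime. minikube generates it in Go (crypto/x509), but an equivalent openssl sketch using the org and SANs from the log would be:

  openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
    -subj "/O=jenkins.ha-834040-m02" -out server.csr
  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 1095 \
    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.101,DNS:ha-834040-m02,DNS:localhost,DNS:minikube") \
    -out server.pem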
	I0311 20:24:07.106447   27491 provision.go:177] copyRemoteCerts
	I0311 20:24:07.106502   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 20:24:07.106523   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:07.108974   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.109246   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.109281   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.109405   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:07.109577   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.109734   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:07.109896   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	I0311 20:24:07.191972   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0311 20:24:07.192039   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 20:24:07.221996   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0311 20:24:07.222049   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0311 20:24:07.251245   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0311 20:24:07.251301   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 20:24:07.281208   27491 provision.go:87] duration metric: took 306.794898ms to configureAuth
	I0311 20:24:07.281232   27491 buildroot.go:189] setting minikube options for container-runtime
	I0311 20:24:07.281395   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:24:07.281485   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:07.283952   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.284307   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.284335   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.284465   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:07.284642   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.284845   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.285023   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:07.285227   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:24:07.285436   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0311 20:24:07.285459   27491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 20:24:07.587453   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
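The %!s(MISSING) in the command logged above is a Go format-verb artifact in the log message only; judging by the echoed output, the command that actually ran drops a CRI-O option file and restarts the service, i.e. roughly:

  sudo mkdir -p /etc/sysconfig && printf "%s" "
  CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
  " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio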
	I0311 20:24:07.587482   27491 main.go:141] libmachine: Checking connection to Docker...
	I0311 20:24:07.587489   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetURL
	I0311 20:24:07.588905   27491 main.go:141] libmachine: (ha-834040-m02) DBG | Using libvirt version 6000000
	I0311 20:24:07.590987   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.591311   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.591341   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.591489   27491 main.go:141] libmachine: Docker is up and running!
	I0311 20:24:07.591503   27491 main.go:141] libmachine: Reticulating splines...
	I0311 20:24:07.591509   27491 client.go:171] duration metric: took 26.797329558s to LocalClient.Create
	I0311 20:24:07.591527   27491 start.go:167] duration metric: took 26.797403966s to libmachine.API.Create "ha-834040"
	I0311 20:24:07.591536   27491 start.go:293] postStartSetup for "ha-834040-m02" (driver="kvm2")
	I0311 20:24:07.591545   27491 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 20:24:07.591568   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:07.591788   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 20:24:07.591815   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:07.593777   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.594109   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.594136   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.594241   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:07.594411   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.594558   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:07.594681   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	I0311 20:24:07.676592   27491 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 20:24:07.681314   27491 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 20:24:07.681335   27491 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 20:24:07.681401   27491 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 20:24:07.681489   27491 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 20:24:07.681500   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /etc/ssl/certs/182352.pem
	I0311 20:24:07.681597   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 20:24:07.692193   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:24:07.717649   27491 start.go:296] duration metric: took 126.100619ms for postStartSetup
	I0311 20:24:07.717720   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetConfigRaw
	I0311 20:24:07.718239   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:24:07.720677   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.721071   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.721102   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.721270   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:24:07.721428   27491 start.go:128] duration metric: took 26.944919506s to createHost
	I0311 20:24:07.721447   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:07.723303   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.723569   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.723598   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.723721   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:07.723920   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.724073   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.724183   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:07.724333   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:24:07.724482   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0311 20:24:07.724492   27491 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 20:24:07.830939   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710188647.805297545
	
	I0311 20:24:07.830964   27491 fix.go:216] guest clock: 1710188647.805297545
	I0311 20:24:07.830975   27491 fix.go:229] Guest: 2024-03-11 20:24:07.805297545 +0000 UTC Remote: 2024-03-11 20:24:07.721438169 +0000 UTC m=+82.411025538 (delta=83.859376ms)
	I0311 20:24:07.830998   27491 fix.go:200] guest clock delta is within tolerance: 83.859376ms
	I0311 20:24:07.831010   27491 start.go:83] releasing machines lock for "ha-834040-m02", held for 27.054592054s
	I0311 20:24:07.831037   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:07.831292   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:24:07.833986   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.834320   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.834348   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.836593   27491 out.go:177] * Found network options:
	I0311 20:24:07.837897   27491 out.go:177]   - NO_PROXY=192.168.39.128
	W0311 20:24:07.839082   27491 proxy.go:119] fail to check proxy env: Error ip not in block
	I0311 20:24:07.839106   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:07.839584   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:07.839758   27491 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:24:07.839848   27491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 20:24:07.839884   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	W0311 20:24:07.839927   27491 proxy.go:119] fail to check proxy env: Error ip not in block
	I0311 20:24:07.839993   27491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 20:24:07.840015   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:24:07.842346   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.842683   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.842709   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.842727   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.842853   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:07.843023   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.843164   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:07.843188   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:07.843215   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:07.843314   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	I0311 20:24:07.843330   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:24:07.843473   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:24:07.843609   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:24:07.843756   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	I0311 20:24:08.078887   27491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 20:24:08.085642   27491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 20:24:08.085706   27491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 20:24:08.102794   27491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 20:24:08.102821   27491 start.go:494] detecting cgroup driver to use...
	I0311 20:24:08.102876   27491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 20:24:08.119750   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 20:24:08.134083   27491 docker.go:217] disabling cri-docker service (if available) ...
	I0311 20:24:08.134122   27491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 20:24:08.148007   27491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 20:24:08.161964   27491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 20:24:08.286584   27491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 20:24:08.464092   27491 docker.go:233] disabling docker service ...
	I0311 20:24:08.464189   27491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 20:24:08.480143   27491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 20:24:08.493994   27491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 20:24:08.619937   27491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 20:24:08.741948   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 20:24:08.760924   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 20:24:08.784235   27491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 20:24:08.784287   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:24:08.795915   27491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 20:24:08.795961   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:24:08.807425   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:24:08.819277   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:24:08.830931   27491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 20:24:08.842775   27491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 20:24:08.853248   27491 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 20:24:08.853293   27491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 20:24:08.868526   27491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 20:24:08.879401   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:24:08.996543   27491 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 20:24:09.139370   27491 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 20:24:09.139462   27491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 20:24:09.144664   27491 start.go:562] Will wait 60s for crictl version
	I0311 20:24:09.144714   27491 ssh_runner.go:195] Run: which crictl
	I0311 20:24:09.149034   27491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 20:24:09.185654   27491 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 20:24:09.185731   27491 ssh_runner.go:195] Run: crio --version
	I0311 20:24:09.215883   27491 ssh_runner.go:195] Run: crio --version
	I0311 20:24:09.247430   27491 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 20:24:09.248714   27491 out.go:177]   - env NO_PROXY=192.168.39.128
	I0311 20:24:09.249991   27491 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:24:09.252590   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:09.252997   27491 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:56 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:24:09.253022   27491 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:24:09.253192   27491 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 20:24:09.257888   27491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 20:24:09.271482   27491 mustload.go:65] Loading cluster: ha-834040
	I0311 20:24:09.271645   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:24:09.271876   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:24:09.271915   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:24:09.286805   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0311 20:24:09.287155   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:24:09.287645   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:24:09.287672   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:24:09.287951   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:24:09.288128   27491 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:24:09.289622   27491 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:24:09.289892   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:24:09.289922   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:24:09.303513   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0311 20:24:09.303833   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:24:09.304235   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:24:09.304257   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:24:09.304587   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:24:09.304763   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:24:09.304908   27491 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040 for IP: 192.168.39.101
	I0311 20:24:09.304920   27491 certs.go:194] generating shared ca certs ...
	I0311 20:24:09.304938   27491 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:24:09.305043   27491 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 20:24:09.305081   27491 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 20:24:09.305090   27491 certs.go:256] generating profile certs ...
	I0311 20:24:09.305155   27491 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key
	I0311 20:24:09.305175   27491 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.2645eb02
	I0311 20:24:09.305188   27491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.2645eb02 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.101 192.168.39.254]
	I0311 20:24:09.446752   27491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.2645eb02 ...
	I0311 20:24:09.446779   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.2645eb02: {Name:mk1103d1562a60daa1f3efd4d01a6beca972a730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:24:09.446934   27491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.2645eb02 ...
	I0311 20:24:09.446945   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.2645eb02: {Name:mk32d0fe2fab477620d0edc7e12451103a7a72fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:24:09.447011   27491 certs.go:381] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.2645eb02 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt
	I0311 20:24:09.447132   27491 certs.go:385] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.2645eb02 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key
	I0311 20:24:09.447249   27491 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key
	I0311 20:24:09.447264   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0311 20:24:09.447276   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0311 20:24:09.447288   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0311 20:24:09.447303   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0311 20:24:09.447315   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0311 20:24:09.447327   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0311 20:24:09.447339   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0311 20:24:09.447350   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0311 20:24:09.447391   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 20:24:09.447418   27491 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 20:24:09.447428   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 20:24:09.447449   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 20:24:09.447475   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 20:24:09.447495   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 20:24:09.447531   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:24:09.447556   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem -> /usr/share/ca-certificates/18235.pem
	I0311 20:24:09.447569   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /usr/share/ca-certificates/182352.pem
	I0311 20:24:09.447581   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:24:09.447607   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:24:09.450220   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:24:09.450652   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:24:09.450673   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:24:09.450855   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:24:09.451043   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:24:09.451215   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:24:09.451375   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:24:09.525043   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0311 20:24:09.531003   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0311 20:24:09.544207   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0311 20:24:09.549189   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0311 20:24:09.570445   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0311 20:24:09.576168   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0311 20:24:09.593681   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0311 20:24:09.598663   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0311 20:24:09.612987   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0311 20:24:09.617778   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0311 20:24:09.630741   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0311 20:24:09.635763   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0311 20:24:09.647679   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 20:24:09.678481   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 20:24:09.704624   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 20:24:09.730316   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 20:24:09.755545   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0311 20:24:09.781200   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 20:24:09.808784   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 20:24:09.836481   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 20:24:09.863455   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 20:24:09.889122   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 20:24:09.914811   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 20:24:09.940210   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0311 20:24:09.957419   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0311 20:24:09.975644   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0311 20:24:09.993250   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0311 20:24:10.011124   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0311 20:24:10.029276   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0311 20:24:10.047396   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0311 20:24:10.066361   27491 ssh_runner.go:195] Run: openssl version
	I0311 20:24:10.072094   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 20:24:10.082958   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 20:24:10.087473   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 20:24:10.087509   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 20:24:10.093233   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 20:24:10.103994   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 20:24:10.114513   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:24:10.119081   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:24:10.119135   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:24:10.125133   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 20:24:10.136484   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 20:24:10.147342   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 20:24:10.152013   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 20:24:10.152054   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 20:24:10.157909   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 20:24:10.168551   27491 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 20:24:10.172884   27491 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 20:24:10.172928   27491 kubeadm.go:928] updating node {m02 192.168.39.101 8443 v1.28.4 crio true true} ...
	I0311 20:24:10.173005   27491 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-834040-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 20:24:10.173034   27491 kube-vip.go:101] generating kube-vip config ...
	I0311 20:24:10.173065   27491 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0311 20:24:10.173101   27491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 20:24:10.182341   27491 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0311 20:24:10.182376   27491 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0311 20:24:10.191773   27491 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0311 20:24:10.191803   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0311 20:24:10.191866   27491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0311 20:24:10.191896   27491 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0311 20:24:10.191912   27491 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0311 20:24:10.197721   27491 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0311 20:24:10.197742   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0311 20:24:11.331282   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:24:11.346321   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0311 20:24:11.346398   27491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0311 20:24:11.350883   27491 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0311 20:24:11.350906   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0311 20:24:14.211514   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0311 20:24:14.211607   27491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0311 20:24:14.217392   27491 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0311 20:24:14.217428   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0311 20:24:14.491181   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0311 20:24:14.503700   27491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0311 20:24:14.524204   27491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 20:24:14.543304   27491 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0311 20:24:14.561587   27491 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0311 20:24:14.567737   27491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 20:24:14.582271   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:24:14.719542   27491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:24:14.738631   27491 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:24:14.738940   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:24:14.738982   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:24:14.753758   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I0311 20:24:14.754152   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:24:14.754595   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:24:14.754615   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:24:14.755009   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:24:14.755218   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:24:14.755377   27491 start.go:316] joinCluster: &{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:24:14.755460   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0311 20:24:14.755474   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:24:14.758430   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:24:14.758848   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:24:14.758878   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:24:14.759017   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:24:14.759176   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:24:14.759323   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:24:14.759477   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:24:14.933008   27491 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:24:14.933048   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c16koq.5cz3h51ea7m9fsz3 --discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-834040-m02 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443"
	I0311 20:24:56.000347   27491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c16koq.5cz3h51ea7m9fsz3 --discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-834040-m02 --control-plane --apiserver-advertise-address=192.168.39.101 --apiserver-bind-port=8443": (41.067273974s)
	I0311 20:24:56.000374   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0311 20:24:56.434187   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-834040-m02 minikube.k8s.io/updated_at=2024_03_11T20_24_56_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=ha-834040 minikube.k8s.io/primary=false
	I0311 20:24:56.614310   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-834040-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0311 20:24:56.741847   27491 start.go:318] duration metric: took 41.986464707s to joinCluster
	I0311 20:24:56.741918   27491 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:24:56.743311   27491 out.go:177] * Verifying Kubernetes components...
	I0311 20:24:56.742229   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:24:56.744599   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:24:57.054229   27491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:24:57.095064   27491 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:24:57.095365   27491 kapi.go:59] client config for ha-834040: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.crt", KeyFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key", CAFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55640), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0311 20:24:57.095447   27491 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.128:8443
	I0311 20:24:57.095729   27491 node_ready.go:35] waiting up to 6m0s for node "ha-834040-m02" to be "Ready" ...
	I0311 20:24:57.095856   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:24:57.095869   27491 round_trippers.go:469] Request Headers:
	I0311 20:24:57.095880   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:24:57.095886   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:24:57.108596   27491 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0311 20:24:57.596732   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:24:57.596768   27491 round_trippers.go:469] Request Headers:
	I0311 20:24:57.596779   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:24:57.596784   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:24:57.603205   27491 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0311 20:24:58.096019   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:24:58.096043   27491 round_trippers.go:469] Request Headers:
	I0311 20:24:58.096055   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:24:58.096061   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:24:58.100634   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:24:58.596289   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:24:58.596314   27491 round_trippers.go:469] Request Headers:
	I0311 20:24:58.596324   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:24:58.596329   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:24:58.599790   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:24:59.095922   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:24:59.095943   27491 round_trippers.go:469] Request Headers:
	I0311 20:24:59.095950   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:24:59.095956   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:24:59.099193   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:24:59.099924   27491 node_ready.go:53] node "ha-834040-m02" has status "Ready":"False"
	I0311 20:24:59.596329   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:24:59.596354   27491 round_trippers.go:469] Request Headers:
	I0311 20:24:59.596367   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:24:59.596372   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:24:59.600230   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:00.096684   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:00.096706   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:00.096714   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:00.096717   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:00.100756   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:00.596482   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:00.596512   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:00.596520   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:00.596532   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:00.600066   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:01.096615   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:01.096639   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:01.096651   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:01.096656   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:01.101070   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:01.101733   27491 node_ready.go:53] node "ha-834040-m02" has status "Ready":"False"
	I0311 20:25:01.596010   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:01.596030   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:01.596038   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:01.596042   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:01.599178   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:02.096182   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:02.096202   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.096210   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.096214   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.100382   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:02.101034   27491 node_ready.go:49] node "ha-834040-m02" has status "Ready":"True"
	I0311 20:25:02.101054   27491 node_ready.go:38] duration metric: took 5.005300284s for node "ha-834040-m02" to be "Ready" ...
	I0311 20:25:02.101065   27491 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 20:25:02.101155   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:25:02.101166   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.101176   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.101181   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.105954   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:02.113925   27491 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d6f2x" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.114005   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d6f2x
	I0311 20:25:02.114016   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.114026   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.114032   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.117033   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.117526   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:02.117539   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.117549   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.117556   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.120449   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.121030   27491 pod_ready.go:92] pod "coredns-5dd5756b68-d6f2x" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:02.121047   27491 pod_ready.go:81] duration metric: took 7.103461ms for pod "coredns-5dd5756b68-d6f2x" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.121055   27491 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kq47h" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.121100   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kq47h
	I0311 20:25:02.121108   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.121114   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.121120   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.123680   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.124425   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:02.124437   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.124444   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.124447   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.127504   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:02.128051   27491 pod_ready.go:92] pod "coredns-5dd5756b68-kq47h" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:02.128066   27491 pod_ready.go:81] duration metric: took 7.00259ms for pod "coredns-5dd5756b68-kq47h" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.128074   27491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.128159   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-834040
	I0311 20:25:02.128168   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.128174   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.128180   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.130803   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.132071   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:02.132084   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.132093   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.132098   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.134883   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.135662   27491 pod_ready.go:92] pod "etcd-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:02.135676   27491 pod_ready.go:81] duration metric: took 7.594242ms for pod "etcd-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.135683   27491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.135726   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-834040-m02
	I0311 20:25:02.135737   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.135746   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.135756   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.138263   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.138832   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:02.138849   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.138859   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.138864   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.141780   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:02.142811   27491 pod_ready.go:92] pod "etcd-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:02.142827   27491 pod_ready.go:81] duration metric: took 7.138293ms for pod "etcd-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.142838   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.297182   27491 request.go:629] Waited for 154.299948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040
	I0311 20:25:02.297239   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040
	I0311 20:25:02.297245   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.297255   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.297262   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.301120   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:02.497222   27491 request.go:629] Waited for 195.354737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:02.497268   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:02.497273   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.497280   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.497285   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.500580   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:02.501086   27491 pod_ready.go:92] pod "kube-apiserver-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:02.501104   27491 pod_ready.go:81] duration metric: took 358.258018ms for pod "kube-apiserver-ha-834040" in "kube-system" namespace to be "Ready" ...
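The repeated "Waited for ... due to client-side throttling, not priority and fairness" entries above come from client-go's default request rate limiter on the client, not from API Priority and Fairness on the server. A minimal sketch, assuming client-go and the usual kubeconfig, of how a caller could raise those limits; the QPS/Burst values are illustrative assumptions, not minikube's settings:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig a kubectl-style client would use.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; once the burst budget is spent,
	// client-go delays requests and logs the "client-side throttling" lines.
	cfg.QPS = 50    // illustrative value
	cfg.Burst = 100 // illustrative value

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-apiserver-ha-834040", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.Name, pod.Status.Phase)
}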
	I0311 20:25:02.501127   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:02.697194   27491 request.go:629] Waited for 195.986873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:02.697261   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:02.697270   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.697277   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.697281   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.700555   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:02.896728   27491 request.go:629] Waited for 195.42862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:02.896824   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:02.896833   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:02.896843   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:02.896852   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:02.900841   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:03.096916   27491 request.go:629] Waited for 95.254865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:03.096997   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:03.097008   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:03.097019   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:03.097027   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:03.100889   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:03.297149   27491 request.go:629] Waited for 195.361273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:03.297198   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:03.297203   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:03.297213   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:03.297219   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:03.301383   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:03.502146   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:03.502166   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:03.502174   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:03.502178   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:03.507151   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:03.696144   27491 request.go:629] Waited for 188.292165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:03.696222   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:03.696227   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:03.696235   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:03.696238   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:03.699439   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:04.002211   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:04.002232   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:04.002240   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:04.002244   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:04.007636   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:25:04.096473   27491 request.go:629] Waited for 88.234909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:04.096516   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:04.096521   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:04.096529   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:04.096540   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:04.099659   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:04.501599   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:04.501625   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:04.501646   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:04.501652   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:04.505133   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:04.505970   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:04.505985   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:04.505993   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:04.505999   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:04.508683   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:04.509590   27491 pod_ready.go:102] pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace has status "Ready":"False"
	I0311 20:25:05.001429   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:05.001456   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:05.001468   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:05.001476   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:05.005521   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:05.006316   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:05.006330   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:05.006342   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:05.006346   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:05.009333   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:05.502069   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:05.502086   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:05.502094   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:05.502097   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:05.505566   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:05.506322   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:05.506337   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:05.506344   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:05.506347   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:05.509177   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:25:06.001481   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:06.001501   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:06.001508   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:06.001512   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:06.005575   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:06.006743   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:06.006755   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:06.006762   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:06.006767   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:06.010301   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:06.501382   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:06.501404   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:06.501411   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:06.501416   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:06.506322   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:06.507053   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:06.507069   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:06.507076   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:06.507080   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:06.510691   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:06.511103   27491 pod_ready.go:102] pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace has status "Ready":"False"
	I0311 20:25:07.001462   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:25:07.001481   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.001489   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.001492   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.007649   27491 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0311 20:25:07.008261   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:07.008276   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.008284   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.008287   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.015581   27491 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 20:25:07.016178   27491 pod_ready.go:92] pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:07.016195   27491 pod_ready.go:81] duration metric: took 4.515042161s for pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.016204   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.016257   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040
	I0311 20:25:07.016265   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.016272   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.016277   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.020024   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:07.020704   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:07.020716   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.020723   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.020727   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.024370   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:07.024860   27491 pod_ready.go:92] pod "kube-controller-manager-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:07.024875   27491 pod_ready.go:81] duration metric: took 8.665178ms for pod "kube-controller-manager-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.024883   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.097186   27491 request.go:629] Waited for 72.263494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040-m02
	I0311 20:25:07.097259   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040-m02
	I0311 20:25:07.097268   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.097279   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.097292   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.102859   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:25:07.296221   27491 request.go:629] Waited for 192.271341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:07.296274   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:07.296288   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.296313   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.296320   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.300428   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:07.301029   27491 pod_ready.go:92] pod "kube-controller-manager-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:07.301047   27491 pod_ready.go:81] duration metric: took 276.158386ms for pod "kube-controller-manager-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.301056   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dsjx4" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.496442   27491 request.go:629] Waited for 195.329737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsjx4
	I0311 20:25:07.496500   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsjx4
	I0311 20:25:07.496505   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.496513   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.496518   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.500079   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:07.697138   27491 request.go:629] Waited for 196.195898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:07.697214   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:07.697227   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.697237   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.697246   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.702892   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:25:07.704091   27491 pod_ready.go:92] pod "kube-proxy-dsjx4" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:07.704113   27491 pod_ready.go:81] duration metric: took 403.050172ms for pod "kube-proxy-dsjx4" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.704127   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h8svv" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:07.897040   27491 request.go:629] Waited for 192.804717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h8svv
	I0311 20:25:07.897099   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h8svv
	I0311 20:25:07.897107   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:07.897121   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:07.897131   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:07.901240   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:08.096501   27491 request.go:629] Waited for 194.354639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:08.096563   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:08.096571   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:08.096578   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:08.096590   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:08.100062   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:08.100938   27491 pod_ready.go:92] pod "kube-proxy-h8svv" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:08.100959   27491 pod_ready.go:81] duration metric: took 396.822704ms for pod "kube-proxy-h8svv" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:08.100976   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:08.296993   27491 request.go:629] Waited for 195.933071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040
	I0311 20:25:08.297047   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040
	I0311 20:25:08.297052   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:08.297058   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:08.297063   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:08.300456   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:08.496512   27491 request.go:629] Waited for 195.342547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:08.496582   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:25:08.496593   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:08.496603   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:08.496610   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:08.499946   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:08.500743   27491 pod_ready.go:92] pod "kube-scheduler-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:08.500757   27491 pod_ready.go:81] duration metric: took 399.770972ms for pod "kube-scheduler-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:08.500766   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:08.696950   27491 request.go:629] Waited for 196.133275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040-m02
	I0311 20:25:08.697046   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040-m02
	I0311 20:25:08.697055   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:08.697062   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:08.697067   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:08.701016   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:08.897099   27491 request.go:629] Waited for 195.338584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:08.897157   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:25:08.897164   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:08.897176   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:08.897186   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:08.901688   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:08.902660   27491 pod_ready.go:92] pod "kube-scheduler-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:25:08.902678   27491 pod_ready.go:81] duration metric: took 401.905871ms for pod "kube-scheduler-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:25:08.902691   27491 pod_ready.go:38] duration metric: took 6.801589621s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
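The pod_ready.go loop above polls each control-plane pod (and its node) until the pod reports Ready, with a 6m0s budget per pod. A minimal sketch of the same condition check with client-go; the pod name and interval are illustrative, and this is not minikube's own helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, mirroring the 6m0s budget in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-834040", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return isPodReady(pod), nil
		})
	fmt.Println("ready:", err == nil)
}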
	I0311 20:25:08.902712   27491 api_server.go:52] waiting for apiserver process to appear ...
	I0311 20:25:08.902774   27491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:25:08.921063   27491 api_server.go:72] duration metric: took 12.17911372s to wait for apiserver process to appear ...
	I0311 20:25:08.921085   27491 api_server.go:88] waiting for apiserver healthz status ...
	I0311 20:25:08.921103   27491 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0311 20:25:08.925702   27491 api_server.go:279] https://192.168.39.128:8443/healthz returned 200:
	ok
	I0311 20:25:08.925785   27491 round_trippers.go:463] GET https://192.168.39.128:8443/version
	I0311 20:25:08.925797   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:08.925806   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:08.925816   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:08.926901   27491 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0311 20:25:08.927075   27491 api_server.go:141] control plane version: v1.28.4
	I0311 20:25:08.927093   27491 api_server.go:131] duration metric: took 6.003215ms to wait for apiserver health ...
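The healthz probe and the /version request above are the two lightweight checks used here to decide the apiserver is serving. A hedged, minimal equivalent with client-go (illustrative only, not the exact minikube call sites):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz returns the literal body "ok" when the apiserver is healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	// GET /version reports the control-plane version (v1.28.4 in this run).
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz=%s version=%s\n", string(body), v.GitVersion)
}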
	I0311 20:25:08.927100   27491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 20:25:09.096448   27491 request.go:629] Waited for 169.296838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:25:09.096511   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:25:09.096516   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:09.096523   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:09.096526   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:09.101542   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:25:09.106192   27491 system_pods.go:59] 17 kube-system pods found
	I0311 20:25:09.106227   27491 system_pods.go:61] "coredns-5dd5756b68-d6f2x" [ddc7bef4-f6c5-442f-8149-e52a1822986d] Running
	I0311 20:25:09.106233   27491 system_pods.go:61] "coredns-5dd5756b68-kq47h" [f2a70553-206f-4d11-b32f-01ddd30db8ec] Running
	I0311 20:25:09.106236   27491 system_pods.go:61] "etcd-ha-834040" [76aef9d7-e8f7-4675-92db-614a3723f8b0] Running
	I0311 20:25:09.106239   27491 system_pods.go:61] "etcd-ha-834040-m02" [c87b59c2-5dcd-4217-9d64-1eab2ecf0075] Running
	I0311 20:25:09.106243   27491 system_pods.go:61] "kindnet-bw656" [edb13135-e5b5-46df-922e-5ebfb444c219] Running
	I0311 20:25:09.106247   27491 system_pods.go:61] "kindnet-rqcq6" [7c368ac4-0fa3-4185-98a7-40df481939ee] Running
	I0311 20:25:09.106259   27491 system_pods.go:61] "kube-apiserver-ha-834040" [f1a21652-f5f0-4ff4-a181-9719fbb72320] Running
	I0311 20:25:09.106264   27491 system_pods.go:61] "kube-apiserver-ha-834040-m02" [eaadd58d-4c00-4dd8-94fe-2d28bed895f5] Running
	I0311 20:25:09.106269   27491 system_pods.go:61] "kube-controller-manager-ha-834040" [48fff24f-f490-4cad-ae02-67dd35208820] Running
	I0311 20:25:09.106274   27491 system_pods.go:61] "kube-controller-manager-ha-834040-m02" [a3418676-a178-4f18-accd-cbc835234b6f] Running
	I0311 20:25:09.106279   27491 system_pods.go:61] "kube-proxy-dsjx4" [b8dccd4a-d900-4c56-8861-4c19dbda4a31] Running
	I0311 20:25:09.106286   27491 system_pods.go:61] "kube-proxy-h8svv" [3a7973ca-9a35-4190-8845-cc685619b093] Running
	I0311 20:25:09.106291   27491 system_pods.go:61] "kube-scheduler-ha-834040" [665bbcfc-d34c-46f7-8c3c-73380466fb35] Running
	I0311 20:25:09.106296   27491 system_pods.go:61] "kube-scheduler-ha-834040-m02" [3429847c-a119-4dba-bcfc-f41e6bd8b351] Running
	I0311 20:25:09.106300   27491 system_pods.go:61] "kube-vip-ha-834040" [d539e386-31f6-4b7c-9e36-8a413b82a4a8] Running
	I0311 20:25:09.106304   27491 system_pods.go:61] "kube-vip-ha-834040-m02" [59d64aa5-94ab-44d5-a42e-5453eb2c0b37] Running
	I0311 20:25:09.106307   27491 system_pods.go:61] "storage-provisioner" [bbc64228-86a0-4e0c-9eef-f4644439ca13] Running
	I0311 20:25:09.106312   27491 system_pods.go:74] duration metric: took 179.207071ms to wait for pod list to return data ...
	I0311 20:25:09.106320   27491 default_sa.go:34] waiting for default service account to be created ...
	I0311 20:25:09.296703   27491 request.go:629] Waited for 190.328936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0311 20:25:09.296767   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0311 20:25:09.296773   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:09.296780   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:09.296784   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:09.300442   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:09.300682   27491 default_sa.go:45] found service account: "default"
	I0311 20:25:09.300698   27491 default_sa.go:55] duration metric: took 194.373229ms for default service account to be created ...
	I0311 20:25:09.300706   27491 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 20:25:09.496803   27491 request.go:629] Waited for 196.035335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:25:09.496882   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:25:09.496889   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:09.496897   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:09.496906   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:09.502227   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:25:09.507523   27491 system_pods.go:86] 17 kube-system pods found
	I0311 20:25:09.507544   27491 system_pods.go:89] "coredns-5dd5756b68-d6f2x" [ddc7bef4-f6c5-442f-8149-e52a1822986d] Running
	I0311 20:25:09.507550   27491 system_pods.go:89] "coredns-5dd5756b68-kq47h" [f2a70553-206f-4d11-b32f-01ddd30db8ec] Running
	I0311 20:25:09.507556   27491 system_pods.go:89] "etcd-ha-834040" [76aef9d7-e8f7-4675-92db-614a3723f8b0] Running
	I0311 20:25:09.507566   27491 system_pods.go:89] "etcd-ha-834040-m02" [c87b59c2-5dcd-4217-9d64-1eab2ecf0075] Running
	I0311 20:25:09.507576   27491 system_pods.go:89] "kindnet-bw656" [edb13135-e5b5-46df-922e-5ebfb444c219] Running
	I0311 20:25:09.507584   27491 system_pods.go:89] "kindnet-rqcq6" [7c368ac4-0fa3-4185-98a7-40df481939ee] Running
	I0311 20:25:09.507594   27491 system_pods.go:89] "kube-apiserver-ha-834040" [f1a21652-f5f0-4ff4-a181-9719fbb72320] Running
	I0311 20:25:09.507603   27491 system_pods.go:89] "kube-apiserver-ha-834040-m02" [eaadd58d-4c00-4dd8-94fe-2d28bed895f5] Running
	I0311 20:25:09.507609   27491 system_pods.go:89] "kube-controller-manager-ha-834040" [48fff24f-f490-4cad-ae02-67dd35208820] Running
	I0311 20:25:09.507618   27491 system_pods.go:89] "kube-controller-manager-ha-834040-m02" [a3418676-a178-4f18-accd-cbc835234b6f] Running
	I0311 20:25:09.507625   27491 system_pods.go:89] "kube-proxy-dsjx4" [b8dccd4a-d900-4c56-8861-4c19dbda4a31] Running
	I0311 20:25:09.507635   27491 system_pods.go:89] "kube-proxy-h8svv" [3a7973ca-9a35-4190-8845-cc685619b093] Running
	I0311 20:25:09.507643   27491 system_pods.go:89] "kube-scheduler-ha-834040" [665bbcfc-d34c-46f7-8c3c-73380466fb35] Running
	I0311 20:25:09.507652   27491 system_pods.go:89] "kube-scheduler-ha-834040-m02" [3429847c-a119-4dba-bcfc-f41e6bd8b351] Running
	I0311 20:25:09.507661   27491 system_pods.go:89] "kube-vip-ha-834040" [d539e386-31f6-4b7c-9e36-8a413b82a4a8] Running
	I0311 20:25:09.507667   27491 system_pods.go:89] "kube-vip-ha-834040-m02" [59d64aa5-94ab-44d5-a42e-5453eb2c0b37] Running
	I0311 20:25:09.507675   27491 system_pods.go:89] "storage-provisioner" [bbc64228-86a0-4e0c-9eef-f4644439ca13] Running
	I0311 20:25:09.507688   27491 system_pods.go:126] duration metric: took 206.972856ms to wait for k8s-apps to be running ...
	I0311 20:25:09.507701   27491 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 20:25:09.507747   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:25:09.523611   27491 system_svc.go:56] duration metric: took 15.904633ms WaitForService to wait for kubelet
	I0311 20:25:09.523637   27491 kubeadm.go:576] duration metric: took 12.781689138s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 20:25:09.523657   27491 node_conditions.go:102] verifying NodePressure condition ...
	I0311 20:25:09.697062   27491 request.go:629] Waited for 173.340368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes
	I0311 20:25:09.697126   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes
	I0311 20:25:09.697131   27491 round_trippers.go:469] Request Headers:
	I0311 20:25:09.697139   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:25:09.697147   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:25:09.700420   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:25:09.702981   27491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 20:25:09.703003   27491 node_conditions.go:123] node cpu capacity is 2
	I0311 20:25:09.703012   27491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 20:25:09.703016   27491 node_conditions.go:123] node cpu capacity is 2
	I0311 20:25:09.703020   27491 node_conditions.go:105] duration metric: took 179.357298ms to run NodePressure ...
	I0311 20:25:09.703029   27491 start.go:240] waiting for startup goroutines ...
	I0311 20:25:09.703052   27491 start.go:254] writing updated cluster config ...
	I0311 20:25:09.705371   27491 out.go:177] 
	I0311 20:25:09.706745   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:25:09.706832   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:25:09.708578   27491 out.go:177] * Starting "ha-834040-m03" control-plane node in "ha-834040" cluster
	I0311 20:25:09.710147   27491 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:25:09.710167   27491 cache.go:56] Caching tarball of preloaded images
	I0311 20:25:09.710272   27491 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 20:25:09.710295   27491 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 20:25:09.710404   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:25:09.710663   27491 start.go:360] acquireMachinesLock for ha-834040-m03: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 20:25:09.710721   27491 start.go:364] duration metric: took 29.271µs to acquireMachinesLock for "ha-834040-m03"
	I0311 20:25:09.710746   27491 start.go:93] Provisioning new machine with config: &{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:25:09.710873   27491 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0311 20:25:09.712644   27491 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 20:25:09.712725   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:25:09.712785   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:25:09.729319   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46573
	I0311 20:25:09.729650   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:25:09.730134   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:25:09.730163   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:25:09.730525   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:25:09.730708   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetMachineName
	I0311 20:25:09.730891   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:09.731027   27491 start.go:159] libmachine.API.Create for "ha-834040" (driver="kvm2")
	I0311 20:25:09.731063   27491 client.go:168] LocalClient.Create starting
	I0311 20:25:09.731090   27491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem
	I0311 20:25:09.731119   27491 main.go:141] libmachine: Decoding PEM data...
	I0311 20:25:09.731134   27491 main.go:141] libmachine: Parsing certificate...
	I0311 20:25:09.731182   27491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem
	I0311 20:25:09.731200   27491 main.go:141] libmachine: Decoding PEM data...
	I0311 20:25:09.731212   27491 main.go:141] libmachine: Parsing certificate...
	I0311 20:25:09.731228   27491 main.go:141] libmachine: Running pre-create checks...
	I0311 20:25:09.731236   27491 main.go:141] libmachine: (ha-834040-m03) Calling .PreCreateCheck
	I0311 20:25:09.731356   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetConfigRaw
	I0311 20:25:09.731729   27491 main.go:141] libmachine: Creating machine...
	I0311 20:25:09.731742   27491 main.go:141] libmachine: (ha-834040-m03) Calling .Create
	I0311 20:25:09.731850   27491 main.go:141] libmachine: (ha-834040-m03) Creating KVM machine...
	I0311 20:25:09.733124   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found existing default KVM network
	I0311 20:25:09.733298   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found existing private KVM network mk-ha-834040
	I0311 20:25:09.733443   27491 main.go:141] libmachine: (ha-834040-m03) Setting up store path in /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03 ...
	I0311 20:25:09.733468   27491 main.go:141] libmachine: (ha-834040-m03) Building disk image from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0311 20:25:09.733518   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:09.733421   28175 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:25:09.733577   27491 main.go:141] libmachine: (ha-834040-m03) Downloading /home/jenkins/minikube-integration/18358-11004/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0311 20:25:09.954288   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:09.954184   28175 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa...
	I0311 20:25:10.124677   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:10.124565   28175 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/ha-834040-m03.rawdisk...
	I0311 20:25:10.124705   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Writing magic tar header
	I0311 20:25:10.124715   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Writing SSH key tar header
	I0311 20:25:10.124726   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:10.124666   28175 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03 ...
	I0311 20:25:10.124821   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03
	I0311 20:25:10.124844   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines
	I0311 20:25:10.124857   27491 main.go:141] libmachine: (ha-834040-m03) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03 (perms=drwx------)
	I0311 20:25:10.124871   27491 main.go:141] libmachine: (ha-834040-m03) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines (perms=drwxr-xr-x)
	I0311 20:25:10.124883   27491 main.go:141] libmachine: (ha-834040-m03) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube (perms=drwxr-xr-x)
	I0311 20:25:10.124898   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:25:10.124918   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004
	I0311 20:25:10.124932   27491 main.go:141] libmachine: (ha-834040-m03) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004 (perms=drwxrwxr-x)
	I0311 20:25:10.124947   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0311 20:25:10.124960   27491 main.go:141] libmachine: (ha-834040-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0311 20:25:10.124970   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home/jenkins
	I0311 20:25:10.124986   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Checking permissions on dir: /home
	I0311 20:25:10.124999   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Skipping /home - not owner
	I0311 20:25:10.125012   27491 main.go:141] libmachine: (ha-834040-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0311 20:25:10.125027   27491 main.go:141] libmachine: (ha-834040-m03) Creating domain...
	I0311 20:25:10.125863   27491 main.go:141] libmachine: (ha-834040-m03) define libvirt domain using xml: 
	I0311 20:25:10.125890   27491 main.go:141] libmachine: (ha-834040-m03) <domain type='kvm'>
	I0311 20:25:10.125902   27491 main.go:141] libmachine: (ha-834040-m03)   <name>ha-834040-m03</name>
	I0311 20:25:10.125915   27491 main.go:141] libmachine: (ha-834040-m03)   <memory unit='MiB'>2200</memory>
	I0311 20:25:10.125928   27491 main.go:141] libmachine: (ha-834040-m03)   <vcpu>2</vcpu>
	I0311 20:25:10.125935   27491 main.go:141] libmachine: (ha-834040-m03)   <features>
	I0311 20:25:10.125945   27491 main.go:141] libmachine: (ha-834040-m03)     <acpi/>
	I0311 20:25:10.125956   27491 main.go:141] libmachine: (ha-834040-m03)     <apic/>
	I0311 20:25:10.125967   27491 main.go:141] libmachine: (ha-834040-m03)     <pae/>
	I0311 20:25:10.125978   27491 main.go:141] libmachine: (ha-834040-m03)     
	I0311 20:25:10.125989   27491 main.go:141] libmachine: (ha-834040-m03)   </features>
	I0311 20:25:10.126003   27491 main.go:141] libmachine: (ha-834040-m03)   <cpu mode='host-passthrough'>
	I0311 20:25:10.126012   27491 main.go:141] libmachine: (ha-834040-m03)   
	I0311 20:25:10.126020   27491 main.go:141] libmachine: (ha-834040-m03)   </cpu>
	I0311 20:25:10.126033   27491 main.go:141] libmachine: (ha-834040-m03)   <os>
	I0311 20:25:10.126045   27491 main.go:141] libmachine: (ha-834040-m03)     <type>hvm</type>
	I0311 20:25:10.126059   27491 main.go:141] libmachine: (ha-834040-m03)     <boot dev='cdrom'/>
	I0311 20:25:10.126070   27491 main.go:141] libmachine: (ha-834040-m03)     <boot dev='hd'/>
	I0311 20:25:10.126094   27491 main.go:141] libmachine: (ha-834040-m03)     <bootmenu enable='no'/>
	I0311 20:25:10.126118   27491 main.go:141] libmachine: (ha-834040-m03)   </os>
	I0311 20:25:10.126132   27491 main.go:141] libmachine: (ha-834040-m03)   <devices>
	I0311 20:25:10.126148   27491 main.go:141] libmachine: (ha-834040-m03)     <disk type='file' device='cdrom'>
	I0311 20:25:10.126166   27491 main.go:141] libmachine: (ha-834040-m03)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/boot2docker.iso'/>
	I0311 20:25:10.126175   27491 main.go:141] libmachine: (ha-834040-m03)       <target dev='hdc' bus='scsi'/>
	I0311 20:25:10.126186   27491 main.go:141] libmachine: (ha-834040-m03)       <readonly/>
	I0311 20:25:10.126195   27491 main.go:141] libmachine: (ha-834040-m03)     </disk>
	I0311 20:25:10.126205   27491 main.go:141] libmachine: (ha-834040-m03)     <disk type='file' device='disk'>
	I0311 20:25:10.126218   27491 main.go:141] libmachine: (ha-834040-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0311 20:25:10.126238   27491 main.go:141] libmachine: (ha-834040-m03)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/ha-834040-m03.rawdisk'/>
	I0311 20:25:10.126253   27491 main.go:141] libmachine: (ha-834040-m03)       <target dev='hda' bus='virtio'/>
	I0311 20:25:10.126265   27491 main.go:141] libmachine: (ha-834040-m03)     </disk>
	I0311 20:25:10.126276   27491 main.go:141] libmachine: (ha-834040-m03)     <interface type='network'>
	I0311 20:25:10.126288   27491 main.go:141] libmachine: (ha-834040-m03)       <source network='mk-ha-834040'/>
	I0311 20:25:10.126299   27491 main.go:141] libmachine: (ha-834040-m03)       <model type='virtio'/>
	I0311 20:25:10.126317   27491 main.go:141] libmachine: (ha-834040-m03)     </interface>
	I0311 20:25:10.126330   27491 main.go:141] libmachine: (ha-834040-m03)     <interface type='network'>
	I0311 20:25:10.126341   27491 main.go:141] libmachine: (ha-834040-m03)       <source network='default'/>
	I0311 20:25:10.126350   27491 main.go:141] libmachine: (ha-834040-m03)       <model type='virtio'/>
	I0311 20:25:10.126360   27491 main.go:141] libmachine: (ha-834040-m03)     </interface>
	I0311 20:25:10.126369   27491 main.go:141] libmachine: (ha-834040-m03)     <serial type='pty'>
	I0311 20:25:10.126379   27491 main.go:141] libmachine: (ha-834040-m03)       <target port='0'/>
	I0311 20:25:10.126387   27491 main.go:141] libmachine: (ha-834040-m03)     </serial>
	I0311 20:25:10.126398   27491 main.go:141] libmachine: (ha-834040-m03)     <console type='pty'>
	I0311 20:25:10.126411   27491 main.go:141] libmachine: (ha-834040-m03)       <target type='serial' port='0'/>
	I0311 20:25:10.126425   27491 main.go:141] libmachine: (ha-834040-m03)     </console>
	I0311 20:25:10.126438   27491 main.go:141] libmachine: (ha-834040-m03)     <rng model='virtio'>
	I0311 20:25:10.126449   27491 main.go:141] libmachine: (ha-834040-m03)       <backend model='random'>/dev/random</backend>
	I0311 20:25:10.126461   27491 main.go:141] libmachine: (ha-834040-m03)     </rng>
	I0311 20:25:10.126468   27491 main.go:141] libmachine: (ha-834040-m03)     
	I0311 20:25:10.126479   27491 main.go:141] libmachine: (ha-834040-m03)     
	I0311 20:25:10.126494   27491 main.go:141] libmachine: (ha-834040-m03)   </devices>
	I0311 20:25:10.126504   27491 main.go:141] libmachine: (ha-834040-m03) </domain>
	I0311 20:25:10.126514   27491 main.go:141] libmachine: (ha-834040-m03) 
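After emitting the domain XML above, the kvm2 driver defines and boots the VM through libvirt. A minimal sketch of that step using the libvirt Go bindings (libvirt.org/go/libvirt, which needs the libvirt C library to build); the trimmed XML below stands in for the full document printed in the log and would not boot a usable machine on its own:

package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the same system URI the config shows (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Trimmed stand-in for the <domain> document logged above (no disks or NICs here).
	domainXML := `<domain type='kvm'>
  <name>ha-834040-m03</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type></os>
</domain>`

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	// Create() starts the defined (persistent) domain, i.e. "Creating domain..." in the log.
	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain started")
}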
	I0311 20:25:10.133010   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:23:1f:55 in network default
	I0311 20:25:10.133685   27491 main.go:141] libmachine: (ha-834040-m03) Ensuring networks are active...
	I0311 20:25:10.133713   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:10.134445   27491 main.go:141] libmachine: (ha-834040-m03) Ensuring network default is active
	I0311 20:25:10.134720   27491 main.go:141] libmachine: (ha-834040-m03) Ensuring network mk-ha-834040 is active
	I0311 20:25:10.135111   27491 main.go:141] libmachine: (ha-834040-m03) Getting domain xml...
	I0311 20:25:10.135810   27491 main.go:141] libmachine: (ha-834040-m03) Creating domain...
	I0311 20:25:11.341583   27491 main.go:141] libmachine: (ha-834040-m03) Waiting to get IP...
	I0311 20:25:11.343518   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:11.344048   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:11.344079   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:11.344006   28175 retry.go:31] will retry after 213.574303ms: waiting for machine to come up
	I0311 20:25:11.559415   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:11.559785   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:11.559812   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:11.559746   28175 retry.go:31] will retry after 252.339913ms: waiting for machine to come up
	I0311 20:25:11.814155   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:11.814612   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:11.814639   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:11.814562   28175 retry.go:31] will retry after 325.721227ms: waiting for machine to come up
	I0311 20:25:12.142249   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:12.142702   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:12.142731   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:12.142656   28175 retry.go:31] will retry after 552.651246ms: waiting for machine to come up
	I0311 20:25:12.697337   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:12.697772   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:12.697806   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:12.697727   28175 retry.go:31] will retry after 695.62001ms: waiting for machine to come up
	I0311 20:25:13.394518   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:13.394985   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:13.395014   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:13.394946   28175 retry.go:31] will retry after 742.694244ms: waiting for machine to come up
	I0311 20:25:14.139131   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:14.139524   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:14.139550   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:14.139483   28175 retry.go:31] will retry after 834.612641ms: waiting for machine to come up
	I0311 20:25:14.975514   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:14.976013   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:14.976039   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:14.975960   28175 retry.go:31] will retry after 1.136028207s: waiting for machine to come up
	I0311 20:25:16.113828   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:16.114350   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:16.114381   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:16.114284   28175 retry.go:31] will retry after 1.503117438s: waiting for machine to come up
	I0311 20:25:17.618499   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:17.618941   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:17.618964   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:17.618902   28175 retry.go:31] will retry after 1.502353682s: waiting for machine to come up
	I0311 20:25:19.122494   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:19.122914   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:19.122945   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:19.122867   28175 retry.go:31] will retry after 2.128080831s: waiting for machine to come up
	I0311 20:25:21.253320   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:21.253755   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:21.253777   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:21.253713   28175 retry.go:31] will retry after 3.478671111s: waiting for machine to come up
	I0311 20:25:24.733738   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:24.734197   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:24.734222   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:24.734159   28175 retry.go:31] will retry after 3.215581774s: waiting for machine to come up
	I0311 20:25:27.951029   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:27.951466   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find current IP address of domain ha-834040-m03 in network mk-ha-834040
	I0311 20:25:27.951493   27491 main.go:141] libmachine: (ha-834040-m03) DBG | I0311 20:25:27.951432   28175 retry.go:31] will retry after 3.808616946s: waiting for machine to come up
	I0311 20:25:31.762631   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:31.763124   27491 main.go:141] libmachine: (ha-834040-m03) Found IP for machine: 192.168.39.40
	I0311 20:25:31.763158   27491 main.go:141] libmachine: (ha-834040-m03) Reserving static IP address...
	I0311 20:25:31.763171   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has current primary IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:31.763584   27491 main.go:141] libmachine: (ha-834040-m03) DBG | unable to find host DHCP lease matching {name: "ha-834040-m03", mac: "52:54:00:93:84:f9", ip: "192.168.39.40"} in network mk-ha-834040
	I0311 20:25:31.833638   27491 main.go:141] libmachine: (ha-834040-m03) Reserved static IP address: 192.168.39.40
	I0311 20:25:31.833672   27491 main.go:141] libmachine: (ha-834040-m03) Waiting for SSH to be available...
	I0311 20:25:31.833682   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Getting to WaitForSSH function...
	I0311 20:25:31.836221   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:31.836645   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:minikube Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:31.836677   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:31.836871   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Using SSH client type: external
	I0311 20:25:31.836894   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa (-rw-------)
	I0311 20:25:31.836926   27491 main.go:141] libmachine: (ha-834040-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 20:25:31.836939   27491 main.go:141] libmachine: (ha-834040-m03) DBG | About to run SSH command:
	I0311 20:25:31.836956   27491 main.go:141] libmachine: (ha-834040-m03) DBG | exit 0
	I0311 20:25:31.972823   27491 main.go:141] libmachine: (ha-834040-m03) DBG | SSH cmd err, output: <nil>: 
	I0311 20:25:31.973077   27491 main.go:141] libmachine: (ha-834040-m03) KVM machine creation complete!
	I0311 20:25:31.973365   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetConfigRaw
	I0311 20:25:31.973930   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:31.974126   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:31.974301   27491 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0311 20:25:31.974318   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetState
	I0311 20:25:31.975522   27491 main.go:141] libmachine: Detecting operating system of created instance...
	I0311 20:25:31.975537   27491 main.go:141] libmachine: Waiting for SSH to be available...
	I0311 20:25:31.975543   27491 main.go:141] libmachine: Getting to WaitForSSH function...
	I0311 20:25:31.975551   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:31.977802   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:31.978213   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:31.978247   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:31.978340   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:31.978518   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:31.978692   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:31.978814   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:31.978986   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:25:31.979209   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0311 20:25:31.979221   27491 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0311 20:25:32.100330   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:25:32.100357   27491 main.go:141] libmachine: Detecting the provisioner...
	I0311 20:25:32.100369   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:32.103119   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.103466   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.103502   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.103649   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:32.103850   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.104024   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.104186   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:32.104345   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:25:32.104545   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0311 20:25:32.104559   27491 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0311 20:25:32.222259   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0311 20:25:32.222332   27491 main.go:141] libmachine: found compatible host: buildroot
	I0311 20:25:32.222339   27491 main.go:141] libmachine: Provisioning with buildroot...
	I0311 20:25:32.222347   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetMachineName
	I0311 20:25:32.222546   27491 buildroot.go:166] provisioning hostname "ha-834040-m03"
	I0311 20:25:32.222569   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetMachineName
	I0311 20:25:32.222758   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:32.225217   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.225618   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.225649   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.225774   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:32.225956   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.226105   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.226250   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:32.226411   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:25:32.226570   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0311 20:25:32.226586   27491 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-834040-m03 && echo "ha-834040-m03" | sudo tee /etc/hostname
	I0311 20:25:32.361275   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-834040-m03
	
	I0311 20:25:32.361301   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:32.363908   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.364271   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.364298   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.364536   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:32.364700   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.364877   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.365044   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:32.365218   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:25:32.365393   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0311 20:25:32.365418   27491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-834040-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-834040-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-834040-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 20:25:32.492045   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:25:32.492074   27491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 20:25:32.492093   27491 buildroot.go:174] setting up certificates
	I0311 20:25:32.492105   27491 provision.go:84] configureAuth start
	I0311 20:25:32.492117   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetMachineName
	I0311 20:25:32.492383   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:25:32.494867   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.495252   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.495281   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.495391   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:32.497440   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.497782   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.497805   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.497942   27491 provision.go:143] copyHostCerts
	I0311 20:25:32.497964   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:25:32.497990   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 20:25:32.497999   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:25:32.498060   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 20:25:32.498140   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:25:32.498158   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 20:25:32.498164   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:25:32.498186   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 20:25:32.498238   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:25:32.498255   27491 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 20:25:32.498261   27491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:25:32.498283   27491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 20:25:32.498334   27491 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.ha-834040-m03 san=[127.0.0.1 192.168.39.40 ha-834040-m03 localhost minikube]
	I0311 20:25:32.678172   27491 provision.go:177] copyRemoteCerts
	I0311 20:25:32.678231   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 20:25:32.678253   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:32.680841   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.681160   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.681187   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.681359   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:32.681533   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.681682   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:32.681810   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:25:32.773481   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0311 20:25:32.773541   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 20:25:32.803215   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0311 20:25:32.803282   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0311 20:25:32.834068   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0311 20:25:32.834146   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 20:25:32.864236   27491 provision.go:87] duration metric: took 372.118438ms to configureAuth
	I0311 20:25:32.864261   27491 buildroot.go:189] setting minikube options for container-runtime
	I0311 20:25:32.864512   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:25:32.864611   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:32.867260   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.867628   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:32.867648   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:32.867855   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:32.868051   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.868235   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:32.868397   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:32.868558   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:25:32.868715   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0311 20:25:32.868729   27491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 20:25:33.157772   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 20:25:33.157796   27491 main.go:141] libmachine: Checking connection to Docker...
	I0311 20:25:33.157804   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetURL
	I0311 20:25:33.159064   27491 main.go:141] libmachine: (ha-834040-m03) DBG | Using libvirt version 6000000
	I0311 20:25:33.161808   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.162234   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.162263   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.162558   27491 main.go:141] libmachine: Docker is up and running!
	I0311 20:25:33.162578   27491 main.go:141] libmachine: Reticulating splines...
	I0311 20:25:33.162586   27491 client.go:171] duration metric: took 23.431512987s to LocalClient.Create
	I0311 20:25:33.162610   27491 start.go:167] duration metric: took 23.431583694s to libmachine.API.Create "ha-834040"
	I0311 20:25:33.162623   27491 start.go:293] postStartSetup for "ha-834040-m03" (driver="kvm2")
	I0311 20:25:33.162636   27491 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 20:25:33.162656   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:33.162886   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 20:25:33.162912   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:33.165322   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.165672   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.165694   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.165820   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:33.166000   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:33.166161   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:33.166295   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:25:33.257602   27491 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 20:25:33.262493   27491 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 20:25:33.262519   27491 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 20:25:33.262590   27491 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 20:25:33.262663   27491 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 20:25:33.262675   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /etc/ssl/certs/182352.pem
	I0311 20:25:33.262748   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 20:25:33.273859   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:25:33.300785   27491 start.go:296] duration metric: took 138.149269ms for postStartSetup
	I0311 20:25:33.300838   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetConfigRaw
	I0311 20:25:33.301361   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:25:33.304190   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.304574   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.304606   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.304935   27491 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:25:33.305148   27491 start.go:128] duration metric: took 23.594261602s to createHost
	I0311 20:25:33.305169   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:33.307510   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.307859   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.307881   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.308017   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:33.308197   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:33.308338   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:33.308436   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:33.308553   27491 main.go:141] libmachine: Using SSH client type: native
	I0311 20:25:33.308760   27491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0311 20:25:33.308774   27491 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 20:25:33.425866   27491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710188733.401747367
	
	I0311 20:25:33.425888   27491 fix.go:216] guest clock: 1710188733.401747367
	I0311 20:25:33.425895   27491 fix.go:229] Guest: 2024-03-11 20:25:33.401747367 +0000 UTC Remote: 2024-03-11 20:25:33.305158733 +0000 UTC m=+167.994746101 (delta=96.588634ms)
	I0311 20:25:33.425910   27491 fix.go:200] guest clock delta is within tolerance: 96.588634ms
	I0311 20:25:33.425917   27491 start.go:83] releasing machines lock for "ha-834040-m03", held for 23.715182973s
	I0311 20:25:33.425939   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:33.426192   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:25:33.428684   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.429057   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.429076   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.432079   27491 out.go:177] * Found network options:
	I0311 20:25:33.433502   27491 out.go:177]   - NO_PROXY=192.168.39.128,192.168.39.101
	W0311 20:25:33.434677   27491 proxy.go:119] fail to check proxy env: Error ip not in block
	W0311 20:25:33.434695   27491 proxy.go:119] fail to check proxy env: Error ip not in block
	I0311 20:25:33.434706   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:33.435241   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:33.435411   27491 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:25:33.435498   27491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 20:25:33.435531   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	W0311 20:25:33.435607   27491 proxy.go:119] fail to check proxy env: Error ip not in block
	W0311 20:25:33.435627   27491 proxy.go:119] fail to check proxy env: Error ip not in block
	I0311 20:25:33.435696   27491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 20:25:33.435713   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:25:33.438098   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.438247   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.438526   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.438552   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.438619   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:33.438619   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:33.438640   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:33.438784   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:33.438808   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:25:33.438996   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:33.439004   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:25:33.439148   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:25:33.439181   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:25:33.439249   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:25:33.682921   27491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 20:25:33.690091   27491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 20:25:33.690155   27491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 20:25:33.707619   27491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 20:25:33.707642   27491 start.go:494] detecting cgroup driver to use...
	I0311 20:25:33.707704   27491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 20:25:33.730253   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 20:25:33.745202   27491 docker.go:217] disabling cri-docker service (if available) ...
	I0311 20:25:33.745254   27491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 20:25:33.760286   27491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 20:25:33.779199   27491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 20:25:33.918971   27491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 20:25:34.100404   27491 docker.go:233] disabling docker service ...
	I0311 20:25:34.100476   27491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 20:25:34.117814   27491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 20:25:34.131823   27491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 20:25:34.257499   27491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 20:25:34.385437   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 20:25:34.402125   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 20:25:34.423643   27491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 20:25:34.423703   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:25:34.435924   27491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 20:25:34.435973   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:25:34.448545   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:25:34.460316   27491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:25:34.473287   27491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 20:25:34.489367   27491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 20:25:34.504068   27491 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 20:25:34.504105   27491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 20:25:34.518731   27491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 20:25:34.530120   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:25:34.667158   27491 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 20:25:34.824768   27491 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 20:25:34.824851   27491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 20:25:34.830068   27491 start.go:562] Will wait 60s for crictl version
	I0311 20:25:34.830116   27491 ssh_runner.go:195] Run: which crictl
	I0311 20:25:34.834225   27491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 20:25:34.875789   27491 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 20:25:34.875859   27491 ssh_runner.go:195] Run: crio --version
	I0311 20:25:34.909125   27491 ssh_runner.go:195] Run: crio --version
	I0311 20:25:34.941559   27491 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 20:25:34.942873   27491 out.go:177]   - env NO_PROXY=192.168.39.128
	I0311 20:25:34.944088   27491 out.go:177]   - env NO_PROXY=192.168.39.128,192.168.39.101
	I0311 20:25:34.945212   27491 main.go:141] libmachine: (ha-834040-m03) Calling .GetIP
	I0311 20:25:34.947834   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:34.948205   27491 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:25:34.948230   27491 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:25:34.948417   27491 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 20:25:34.952855   27491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 20:25:34.967756   27491 mustload.go:65] Loading cluster: ha-834040
	I0311 20:25:34.967978   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:25:34.968213   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:25:34.968246   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:25:34.985591   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44501
	I0311 20:25:34.986032   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:25:34.986556   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:25:34.986575   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:25:34.986863   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:25:34.987042   27491 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:25:34.988397   27491 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:25:34.988694   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:25:34.988728   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:25:35.002546   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0311 20:25:35.002979   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:25:35.003368   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:25:35.003392   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:25:35.003694   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:25:35.003879   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:25:35.004031   27491 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040 for IP: 192.168.39.40
	I0311 20:25:35.004044   27491 certs.go:194] generating shared ca certs ...
	I0311 20:25:35.004059   27491 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:25:35.004156   27491 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 20:25:35.004203   27491 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 20:25:35.004213   27491 certs.go:256] generating profile certs ...
	I0311 20:25:35.004278   27491 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key
	I0311 20:25:35.004301   27491 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.78064c2e
	I0311 20:25:35.004314   27491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.78064c2e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.101 192.168.39.40 192.168.39.254]
	I0311 20:25:35.065803   27491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.78064c2e ...
	I0311 20:25:35.065827   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.78064c2e: {Name:mkbcf692f53b531dbeecd9b17696ae18bbdb46c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:25:35.065977   27491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.78064c2e ...
	I0311 20:25:35.065988   27491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.78064c2e: {Name:mk5766d5ea000b5e91e4f884a481b9bb80e2abe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:25:35.066059   27491 certs.go:381] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.78064c2e -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt
	I0311 20:25:35.066178   27491 certs.go:385] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.78064c2e -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key
	I0311 20:25:35.066294   27491 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key
	I0311 20:25:35.066308   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0311 20:25:35.066320   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0311 20:25:35.066330   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0311 20:25:35.066347   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0311 20:25:35.066359   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0311 20:25:35.066370   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0311 20:25:35.066379   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0311 20:25:35.066389   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0311 20:25:35.066430   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 20:25:35.066456   27491 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 20:25:35.066465   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 20:25:35.066486   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 20:25:35.066506   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 20:25:35.066527   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 20:25:35.066565   27491 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:25:35.066592   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /usr/share/ca-certificates/182352.pem
	I0311 20:25:35.066606   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:25:35.066619   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem -> /usr/share/ca-certificates/18235.pem
	I0311 20:25:35.066648   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:25:35.069587   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:25:35.069930   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:25:35.069948   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:25:35.070097   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:25:35.070268   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:25:35.070448   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:25:35.070609   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:25:35.149002   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0311 20:25:35.154911   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0311 20:25:35.171044   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0311 20:25:35.176214   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0311 20:25:35.188414   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0311 20:25:35.193655   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0311 20:25:35.208849   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0311 20:25:35.213796   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0311 20:25:35.229020   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0311 20:25:35.233951   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0311 20:25:35.246159   27491 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0311 20:25:35.251023   27491 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0311 20:25:35.262706   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 20:25:35.290538   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 20:25:35.317175   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 20:25:35.344091   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 20:25:35.369653   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0311 20:25:35.397081   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 20:25:35.422620   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 20:25:35.448035   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 20:25:35.474624   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 20:25:35.501661   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 20:25:35.527034   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 20:25:35.551653   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0311 20:25:35.571923   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0311 20:25:35.592698   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0311 20:25:35.612110   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0311 20:25:35.630901   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0311 20:25:35.649998   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0311 20:25:35.668152   27491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0311 20:25:35.685805   27491 ssh_runner.go:195] Run: openssl version
	I0311 20:25:35.691628   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 20:25:35.703675   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:25:35.708403   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:25:35.708444   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:25:35.714336   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 20:25:35.726282   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 20:25:35.738193   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 20:25:35.743029   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 20:25:35.743071   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 20:25:35.749065   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 20:25:35.760967   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 20:25:35.773661   27491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 20:25:35.780867   27491 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 20:25:35.780911   27491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 20:25:35.787005   27491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 20:25:35.799108   27491 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 20:25:35.803505   27491 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 20:25:35.803558   27491 kubeadm.go:928] updating node {m03 192.168.39.40 8443 v1.28.4 crio true true} ...
	I0311 20:25:35.803652   27491 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-834040-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.40
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 20:25:35.803679   27491 kube-vip.go:101] generating kube-vip config ...
	I0311 20:25:35.803702   27491 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0311 20:25:35.803733   27491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 20:25:35.814649   27491 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0311 20:25:35.814686   27491 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0311 20:25:35.825551   27491 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0311 20:25:35.825574   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0311 20:25:35.825574   27491 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0311 20:25:35.825552   27491 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0311 20:25:35.825613   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:25:35.825663   27491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0311 20:25:35.825612   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0311 20:25:35.825730   27491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0311 20:25:35.845620   27491 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0311 20:25:35.845677   27491 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0311 20:25:35.845689   27491 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0311 20:25:35.845704   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0311 20:25:35.845707   27491 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0311 20:25:35.845723   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0311 20:25:35.855378   27491 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0311 20:25:35.855406   27491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0311 20:25:36.840758   27491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0311 20:25:36.851442   27491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0311 20:25:36.870142   27491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 20:25:36.888502   27491 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0311 20:25:36.906199   27491 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0311 20:25:36.910659   27491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 20:25:36.924906   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:25:37.066725   27491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:25:37.086385   27491 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:25:37.086693   27491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:25:37.086738   27491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:25:37.101795   27491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I0311 20:25:37.102198   27491 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:25:37.102682   27491 main.go:141] libmachine: Using API Version  1
	I0311 20:25:37.102707   27491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:25:37.103029   27491 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:25:37.103254   27491 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:25:37.103402   27491 start.go:316] joinCluster: &{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:25:37.103519   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0311 20:25:37.103533   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:25:37.106529   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:25:37.107026   27491 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:25:37.107047   27491 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:25:37.107302   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:25:37.107446   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:25:37.107655   27491 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:25:37.107815   27491 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:25:37.273645   27491 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:25:37.273686   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s6hbui.0exotx56n62g6204 --discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-834040-m03 --control-plane --apiserver-advertise-address=192.168.39.40 --apiserver-bind-port=8443"
	I0311 20:26:06.268630   27491 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s6hbui.0exotx56n62g6204 --discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-834040-m03 --control-plane --apiserver-advertise-address=192.168.39.40 --apiserver-bind-port=8443": (28.994918227s)
	I0311 20:26:06.268667   27491 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0311 20:26:07.004010   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-834040-m03 minikube.k8s.io/updated_at=2024_03_11T20_26_07_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=ha-834040 minikube.k8s.io/primary=false
	I0311 20:26:07.126195   27491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-834040-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0311 20:26:07.279907   27491 start.go:318] duration metric: took 30.176500753s to joinCluster
	I0311 20:26:07.279980   27491 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 20:26:07.281633   27491 out.go:177] * Verifying Kubernetes components...
	I0311 20:26:07.280341   27491 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:26:07.283430   27491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:26:07.594824   27491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:26:07.718426   27491 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:26:07.718753   27491 kapi.go:59] client config for ha-834040: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.crt", KeyFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key", CAFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55640), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0311 20:26:07.718830   27491 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.128:8443
	I0311 20:26:07.719076   27491 node_ready.go:35] waiting up to 6m0s for node "ha-834040-m03" to be "Ready" ...
	I0311 20:26:07.719161   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:07.719168   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:07.719179   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:07.719185   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:07.723962   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:08.220047   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:08.220074   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:08.220086   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:08.220094   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:08.225265   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:26:08.720266   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:08.720283   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:08.720292   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:08.720296   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:08.725024   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:09.220016   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:09.220046   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:09.220058   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:09.220063   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:09.224938   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:09.719994   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:09.720015   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:09.720026   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:09.720032   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:09.765396   27491 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0311 20:26:09.766292   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:10.219625   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:10.219646   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:10.219654   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:10.219658   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:10.223885   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:10.719394   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:10.719434   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:10.719445   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:10.719452   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:10.723231   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:11.219580   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:11.219601   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:11.219611   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:11.219618   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:11.225659   27491 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0311 20:26:11.720255   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:11.720281   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:11.720293   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:11.720299   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:11.725960   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:26:12.219276   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:12.219297   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:12.219305   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:12.219308   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:12.223437   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:12.224692   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:12.720135   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:12.720156   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:12.720164   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:12.720168   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:12.724331   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:13.220139   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:13.220158   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:13.220166   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:13.220169   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:13.229448   27491 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0311 20:26:13.719433   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:13.719454   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:13.719466   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:13.719471   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:13.723282   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:14.219258   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:14.219280   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:14.219291   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:14.219300   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:14.223301   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:14.719579   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:14.719599   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:14.719606   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:14.719611   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:14.723604   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:14.724332   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:15.219348   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:15.219391   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:15.219399   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:15.219404   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:15.223615   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:15.719330   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:15.719350   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:15.719356   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:15.719361   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:15.722989   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:16.219193   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:16.219240   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:16.219249   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:16.219253   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:16.223516   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:16.719912   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:16.719936   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:16.719947   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:16.719953   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:16.723779   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:16.724752   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:17.220261   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:17.220281   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:17.220297   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:17.220301   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:17.224452   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:17.719389   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:17.719412   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:17.719423   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:17.719435   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:17.724115   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:18.219225   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:18.219249   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:18.219258   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:18.219262   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:18.223541   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:18.720104   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:18.720125   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:18.720132   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:18.720136   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:18.724024   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:18.724801   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:19.220235   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:19.220258   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:19.220267   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:19.220270   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:19.224218   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:19.719704   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:19.719734   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:19.719742   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:19.719745   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:19.723883   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:20.219768   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:20.219792   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:20.219803   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:20.219810   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:20.223736   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:20.719296   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:20.719315   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:20.719321   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:20.719325   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:20.724046   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:20.725418   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:21.219281   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:21.219305   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:21.219315   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:21.219320   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:21.226663   27491 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 20:26:21.719320   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:21.719342   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:21.719351   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:21.719355   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:21.723326   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:22.219803   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:22.219832   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:22.219845   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:22.219852   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:22.223851   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:22.720067   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:22.720101   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:22.720109   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:22.720112   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:22.724228   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:23.220166   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:23.220187   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:23.220195   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:23.220198   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:23.224743   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:23.225828   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:23.720264   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:23.720289   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:23.720301   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:23.720306   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:23.726120   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:26:24.219346   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:24.219366   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:24.219374   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:24.219378   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:24.223215   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:24.719424   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:24.719445   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:24.719453   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:24.719457   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:24.723982   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:25.219845   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:25.219876   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:25.219887   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:25.219893   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:25.223507   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:25.720233   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:25.720255   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:25.720263   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:25.720266   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:25.724248   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:25.724809   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:26.219361   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:26.219380   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:26.219388   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:26.219391   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:26.223068   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:26.720246   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:26.720266   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:26.720274   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:26.720280   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:26.723719   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:27.219260   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:27.219280   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:27.219293   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:27.219298   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:27.223317   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:27.719222   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:27.719241   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:27.719248   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:27.719252   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:27.723485   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:28.220099   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:28.220120   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:28.220128   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:28.220133   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:28.224190   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:28.224714   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:28.720105   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:28.720133   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:28.720144   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:28.720149   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:28.724865   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:29.219976   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:29.220016   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:29.220027   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:29.220032   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:29.223773   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:29.720030   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:29.720057   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:29.720067   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:29.720074   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:29.723785   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:30.219771   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:30.219801   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:30.219812   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:30.219818   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:30.223844   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:30.719425   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:30.719448   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:30.719458   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:30.719463   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:30.722844   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:30.723585   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:31.219297   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:31.219317   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:31.219325   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:31.219329   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:31.223051   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:31.720148   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:31.720172   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:31.720183   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:31.720190   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:31.724043   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:32.219757   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:32.219777   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:32.219785   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:32.219797   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:32.224264   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:32.720293   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:32.720320   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:32.720330   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:32.720335   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:32.724333   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:32.725270   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:33.219489   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:33.219509   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:33.219516   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:33.219520   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:33.223108   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:33.719288   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:33.719316   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:33.719326   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:33.719334   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:33.723411   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:34.219875   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:34.219902   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:34.219919   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:34.219924   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:34.223753   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:34.719432   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:34.719459   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:34.719465   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:34.719469   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:34.724077   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:35.220180   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:35.220201   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:35.220209   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:35.220213   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:35.224161   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:35.224842   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:35.719959   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:35.719979   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:35.719990   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:35.719994   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:35.724132   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:36.219604   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:36.219622   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:36.219630   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:36.219635   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:36.223713   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:36.719620   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:36.719642   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:36.719651   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:36.719655   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:36.723603   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:37.219862   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:37.219882   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:37.219890   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:37.219893   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:37.223874   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:37.719599   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:37.719624   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:37.719631   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:37.719636   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:37.723918   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:37.724593   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:38.219937   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:38.219956   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:38.219964   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:38.219969   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:38.224012   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:38.719979   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:38.720000   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:38.720012   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:38.720017   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:38.724196   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:39.219349   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:39.219367   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:39.219375   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:39.219379   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:39.223180   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:39.720196   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:39.720220   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:39.720230   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:39.720236   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:39.724488   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:39.725288   27491 node_ready.go:53] node "ha-834040-m03" has status "Ready":"False"
	I0311 20:26:40.220253   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:40.220274   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:40.220282   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:40.220285   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:40.223715   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:40.720325   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:40.720350   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:40.720361   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:40.720368   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:40.724675   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:41.219428   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:41.219447   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.219455   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.219460   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.223091   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.720270   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:41.720296   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.720321   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.720325   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.723935   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.724645   27491 node_ready.go:49] node "ha-834040-m03" has status "Ready":"True"
	I0311 20:26:41.724662   27491 node_ready.go:38] duration metric: took 34.00556893s for node "ha-834040-m03" to be "Ready" ...
	I0311 20:26:41.724670   27491 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 20:26:41.724719   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:26:41.724729   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.724752   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.724758   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.732339   27491 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 20:26:41.738925   27491 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d6f2x" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.739011   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d6f2x
	I0311 20:26:41.739037   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.739052   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.739061   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.742759   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.743554   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:41.743569   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.743576   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.743579   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.746999   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.747593   27491 pod_ready.go:92] pod "coredns-5dd5756b68-d6f2x" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:41.747609   27491 pod_ready.go:81] duration metric: took 8.660607ms for pod "coredns-5dd5756b68-d6f2x" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.747620   27491 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kq47h" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.747675   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kq47h
	I0311 20:26:41.747686   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.747694   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.747699   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.751270   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.752107   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:41.752123   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.752129   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.752133   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.755022   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:26:41.755612   27491 pod_ready.go:92] pod "coredns-5dd5756b68-kq47h" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:41.755632   27491 pod_ready.go:81] duration metric: took 8.005858ms for pod "coredns-5dd5756b68-kq47h" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.755641   27491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.755711   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-834040
	I0311 20:26:41.755723   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.755730   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.755736   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.759060   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.759737   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:41.759750   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.759757   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.759761   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.762560   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:26:41.763231   27491 pod_ready.go:92] pod "etcd-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:41.763245   27491 pod_ready.go:81] duration metric: took 7.591981ms for pod "etcd-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.763253   27491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.763307   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-834040-m02
	I0311 20:26:41.763315   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.763322   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.763325   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.766046   27491 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0311 20:26:41.766617   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:41.766629   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.766636   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.766640   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.770196   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:41.770854   27491 pod_ready.go:92] pod "etcd-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:41.770869   27491 pod_ready.go:81] duration metric: took 7.611236ms for pod "etcd-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.770877   27491 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:41.921250   27491 request.go:629] Waited for 150.293497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-834040-m03
	I0311 20:26:41.921305   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/etcd-ha-834040-m03
	I0311 20:26:41.921312   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:41.921322   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:41.921334   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:41.924892   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:42.120965   27491 request.go:629] Waited for 195.387698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:42.121037   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:42.121043   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:42.121050   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:42.121058   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:42.125315   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:42.125948   27491 pod_ready.go:92] pod "etcd-ha-834040-m03" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:42.125965   27491 pod_ready.go:81] duration metric: took 355.082755ms for pod "etcd-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:42.125980   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:42.321051   27491 request.go:629] Waited for 195.010378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040
	I0311 20:26:42.321132   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040
	I0311 20:26:42.321147   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:42.321157   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:42.321167   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:42.325274   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:42.520284   27491 request.go:629] Waited for 194.26956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:42.520347   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:42.520355   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:42.520374   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:42.520380   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:42.523707   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:42.524370   27491 pod_ready.go:92] pod "kube-apiserver-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:42.524389   27491 pod_ready.go:81] duration metric: took 398.40224ms for pod "kube-apiserver-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:42.524397   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:42.720350   27491 request.go:629] Waited for 195.861906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:26:42.720402   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m02
	I0311 20:26:42.720408   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:42.720419   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:42.720429   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:42.724397   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:42.920829   27491 request.go:629] Waited for 195.32487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:42.920896   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:42.920907   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:42.920917   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:42.920922   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:42.925609   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:42.926427   27491 pod_ready.go:92] pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:42.926443   27491 pod_ready.go:81] duration metric: took 402.039947ms for pod "kube-apiserver-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:42.926452   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:43.120572   27491 request.go:629] Waited for 194.063187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m03
	I0311 20:26:43.120633   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-834040-m03
	I0311 20:26:43.120638   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:43.120650   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:43.120653   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:43.124557   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:43.320910   27491 request.go:629] Waited for 195.361654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:43.320993   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:43.321004   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:43.321016   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:43.321025   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:43.326297   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:26:43.326735   27491 pod_ready.go:92] pod "kube-apiserver-ha-834040-m03" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:43.326753   27491 pod_ready.go:81] duration metric: took 400.293957ms for pod "kube-apiserver-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:43.326766   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:43.520813   27491 request.go:629] Waited for 193.981718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040
	I0311 20:26:43.520893   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040
	I0311 20:26:43.520904   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:43.520915   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:43.520925   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:43.528015   27491 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 20:26:43.721119   27491 request.go:629] Waited for 192.380119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:43.721171   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:43.721176   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:43.721183   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:43.721187   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:43.724840   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:43.725550   27491 pod_ready.go:92] pod "kube-controller-manager-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:43.725567   27491 pod_ready.go:81] duration metric: took 398.793378ms for pod "kube-controller-manager-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:43.725580   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:43.920641   27491 request.go:629] Waited for 194.993846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040-m02
	I0311 20:26:43.920714   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040-m02
	I0311 20:26:43.920722   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:43.920732   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:43.920757   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:43.924521   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:44.120469   27491 request.go:629] Waited for 195.282306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:44.120544   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:44.120553   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:44.120583   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:44.120594   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:44.123920   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:44.124683   27491 pod_ready.go:92] pod "kube-controller-manager-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:44.124700   27491 pod_ready.go:81] duration metric: took 399.1137ms for pod "kube-controller-manager-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:44.124710   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:44.320791   27491 request.go:629] Waited for 195.99729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040-m03
	I0311 20:26:44.320857   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-834040-m03
	I0311 20:26:44.320868   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:44.320878   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:44.320882   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:44.324830   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:44.521112   27491 request.go:629] Waited for 195.375116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:44.521174   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:44.521181   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:44.521192   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:44.521202   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:44.530131   27491 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0311 20:26:44.530711   27491 pod_ready.go:92] pod "kube-controller-manager-ha-834040-m03" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:44.530730   27491 pod_ready.go:81] duration metric: took 406.014637ms for pod "kube-controller-manager-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:44.530740   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4kkwc" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:44.720914   27491 request.go:629] Waited for 190.104905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4kkwc
	I0311 20:26:44.720975   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4kkwc
	I0311 20:26:44.720981   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:44.720988   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:44.720993   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:44.725517   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:44.920916   27491 request.go:629] Waited for 194.662377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:44.920962   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:44.920967   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:44.920974   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:44.920981   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:44.924388   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:44.925318   27491 pod_ready.go:92] pod "kube-proxy-4kkwc" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:44.925337   27491 pod_ready.go:81] duration metric: took 394.590294ms for pod "kube-proxy-4kkwc" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:44.925348   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dsjx4" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:45.121278   27491 request.go:629] Waited for 195.868347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsjx4
	I0311 20:26:45.121350   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsjx4
	I0311 20:26:45.121358   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:45.121369   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:45.121385   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:45.124993   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:45.321251   27491 request.go:629] Waited for 195.371519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:45.321343   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:45.321356   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:45.321365   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:45.321370   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:45.325375   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:45.326090   27491 pod_ready.go:92] pod "kube-proxy-dsjx4" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:45.326111   27491 pod_ready.go:81] duration metric: took 400.753888ms for pod "kube-proxy-dsjx4" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:45.326120   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h8svv" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:45.520780   27491 request.go:629] Waited for 194.601973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h8svv
	I0311 20:26:45.520871   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h8svv
	I0311 20:26:45.520887   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:45.520896   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:45.520905   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:45.524578   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:45.720570   27491 request.go:629] Waited for 195.34865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:45.720618   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:45.720623   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:45.720631   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:45.720636   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:45.724637   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:45.725490   27491 pod_ready.go:92] pod "kube-proxy-h8svv" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:45.725514   27491 pod_ready.go:81] duration metric: took 399.386613ms for pod "kube-proxy-h8svv" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:45.725526   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:45.920956   27491 request.go:629] Waited for 195.299264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040
	I0311 20:26:45.921014   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040
	I0311 20:26:45.921022   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:45.921036   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:45.921045   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:45.925603   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:46.120671   27491 request.go:629] Waited for 194.365352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:46.120717   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040
	I0311 20:26:46.120723   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:46.120729   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:46.120752   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:46.127516   27491 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0311 20:26:46.128233   27491 pod_ready.go:92] pod "kube-scheduler-ha-834040" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:46.128254   27491 pod_ready.go:81] duration metric: took 402.720062ms for pod "kube-scheduler-ha-834040" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:46.128275   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:46.321318   27491 request.go:629] Waited for 192.96989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040-m02
	I0311 20:26:46.321368   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040-m02
	I0311 20:26:46.321374   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:46.321381   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:46.321387   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:46.324797   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:46.520767   27491 request.go:629] Waited for 195.35794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:46.520824   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m02
	I0311 20:26:46.520830   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:46.520840   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:46.520849   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:46.524127   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:46.524788   27491 pod_ready.go:92] pod "kube-scheduler-ha-834040-m02" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:46.524807   27491 pod_ready.go:81] duration metric: took 396.520308ms for pod "kube-scheduler-ha-834040-m02" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:46.524816   27491 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:46.720810   27491 request.go:629] Waited for 195.935972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040-m03
	I0311 20:26:46.720876   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-834040-m03
	I0311 20:26:46.720881   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:46.720893   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:46.720901   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:46.724026   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:46.921238   27491 request.go:629] Waited for 196.348267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:46.921291   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes/ha-834040-m03
	I0311 20:26:46.921296   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:46.921304   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:46.921307   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:46.925055   27491 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0311 20:26:46.925974   27491 pod_ready.go:92] pod "kube-scheduler-ha-834040-m03" in "kube-system" namespace has status "Ready":"True"
	I0311 20:26:46.925996   27491 pod_ready.go:81] duration metric: took 401.172976ms for pod "kube-scheduler-ha-834040-m03" in "kube-system" namespace to be "Ready" ...
	I0311 20:26:46.926009   27491 pod_ready.go:38] duration metric: took 5.201330525s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
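For comparison, the per-component readiness that the harness polls above through the API server can be spot-checked by hand with kubectl. A minimal sketch, assuming the ha-834040 context written by this run is still available (the label selectors are the ones listed in the pod_ready lines above):

    # spot-check the same system-critical components the harness waits on
    kubectl --context ha-834040 -n kube-system get pods -l k8s-app=kube-dns
    kubectl --context ha-834040 -n kube-system wait --for=condition=Ready pod \
      -l component=etcd --timeout=6m0s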
	I0311 20:26:46.926023   27491 api_server.go:52] waiting for apiserver process to appear ...
	I0311 20:26:46.926079   27491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:26:46.947106   27491 api_server.go:72] duration metric: took 39.667095801s to wait for apiserver process to appear ...
	I0311 20:26:46.947130   27491 api_server.go:88] waiting for apiserver healthz status ...
	I0311 20:26:46.947149   27491 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0311 20:26:46.954516   27491 api_server.go:279] https://192.168.39.128:8443/healthz returned 200:
	ok
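The healthz probe above can be reproduced outside the harness; a sketch, assuming the same ha-834040 context. The direct curl form additionally assumes the default system:public-info-viewer binding, which allows unauthenticated access to /healthz:

    kubectl --context ha-834040 get --raw /healthz    # prints "ok" when the apiserver is healthy
    curl -sk https://192.168.39.128:8443/healthz      # endpoint taken from the log above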
	I0311 20:26:46.954585   27491 round_trippers.go:463] GET https://192.168.39.128:8443/version
	I0311 20:26:46.954597   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:46.954608   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:46.954621   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:46.955786   27491 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0311 20:26:46.955851   27491 api_server.go:141] control plane version: v1.28.4
	I0311 20:26:46.955868   27491 api_server.go:131] duration metric: took 8.730483ms to wait for apiserver health ...
	I0311 20:26:46.955880   27491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 20:26:47.120555   27491 request.go:629] Waited for 164.597389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:26:47.120616   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:26:47.120623   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:47.120633   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:47.120647   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:47.128008   27491 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 20:26:47.136078   27491 system_pods.go:59] 24 kube-system pods found
	I0311 20:26:47.136102   27491 system_pods.go:61] "coredns-5dd5756b68-d6f2x" [ddc7bef4-f6c5-442f-8149-e52a1822986d] Running
	I0311 20:26:47.136109   27491 system_pods.go:61] "coredns-5dd5756b68-kq47h" [f2a70553-206f-4d11-b32f-01ddd30db8ec] Running
	I0311 20:26:47.136114   27491 system_pods.go:61] "etcd-ha-834040" [76aef9d7-e8f7-4675-92db-614a3723f8b0] Running
	I0311 20:26:47.136120   27491 system_pods.go:61] "etcd-ha-834040-m02" [c87b59c2-5dcd-4217-9d64-1eab2ecf0075] Running
	I0311 20:26:47.136125   27491 system_pods.go:61] "etcd-ha-834040-m03" [554134f9-440a-4fce-8af9-f25a1a336610] Running
	I0311 20:26:47.136130   27491 system_pods.go:61] "kindnet-bw656" [edb13135-e5b5-46df-922e-5ebfb444c219] Running
	I0311 20:26:47.136147   27491 system_pods.go:61] "kindnet-cf888" [a0eb1481-fce7-4ede-9727-28ff9f3475b1] Running
	I0311 20:26:47.136154   27491 system_pods.go:61] "kindnet-rqcq6" [7c368ac4-0fa3-4185-98a7-40df481939ee] Running
	I0311 20:26:47.136157   27491 system_pods.go:61] "kube-apiserver-ha-834040" [f1a21652-f5f0-4ff4-a181-9719fbb72320] Running
	I0311 20:26:47.136160   27491 system_pods.go:61] "kube-apiserver-ha-834040-m02" [eaadd58d-4c00-4dd8-94fe-2d28bed895f5] Running
	I0311 20:26:47.136163   27491 system_pods.go:61] "kube-apiserver-ha-834040-m03" [60f94aa4-4332-4f32-b9ed-326492680654] Running
	I0311 20:26:47.136166   27491 system_pods.go:61] "kube-controller-manager-ha-834040" [48fff24f-f490-4cad-ae02-67dd35208820] Running
	I0311 20:26:47.136172   27491 system_pods.go:61] "kube-controller-manager-ha-834040-m02" [a3418676-a178-4f18-accd-cbc835234b6f] Running
	I0311 20:26:47.136175   27491 system_pods.go:61] "kube-controller-manager-ha-834040-m03" [44b609b0-feee-4b2d-a414-258c11a66810] Running
	I0311 20:26:47.136178   27491 system_pods.go:61] "kube-proxy-4kkwc" [bd3491fa-75a9-46ff-b61e-a818c82f1fc6] Running
	I0311 20:26:47.136180   27491 system_pods.go:61] "kube-proxy-dsjx4" [b8dccd4a-d900-4c56-8861-4c19dbda4a31] Running
	I0311 20:26:47.136183   27491 system_pods.go:61] "kube-proxy-h8svv" [3a7973ca-9a35-4190-8845-cc685619b093] Running
	I0311 20:26:47.136188   27491 system_pods.go:61] "kube-scheduler-ha-834040" [665bbcfc-d34c-46f7-8c3c-73380466fb35] Running
	I0311 20:26:47.136191   27491 system_pods.go:61] "kube-scheduler-ha-834040-m02" [3429847c-a119-4dba-bcfc-f41e6bd8b351] Running
	I0311 20:26:47.136196   27491 system_pods.go:61] "kube-scheduler-ha-834040-m03" [84aad696-7a60-4242-a214-17c9e4cf2bf6] Running
	I0311 20:26:47.136199   27491 system_pods.go:61] "kube-vip-ha-834040" [d539e386-31f6-4b7c-9e36-8a413b82a4a8] Running
	I0311 20:26:47.136202   27491 system_pods.go:61] "kube-vip-ha-834040-m02" [59d64aa5-94ab-44d5-a42e-5453eb2c0b37] Running
	I0311 20:26:47.136205   27491 system_pods.go:61] "kube-vip-ha-834040-m03" [6a95c6cb-4f07-49d7-abaa-facdc4b0e799] Running
	I0311 20:26:47.136208   27491 system_pods.go:61] "storage-provisioner" [bbc64228-86a0-4e0c-9eef-f4644439ca13] Running
	I0311 20:26:47.136213   27491 system_pods.go:74] duration metric: took 180.324544ms to wait for pod list to return data ...
	I0311 20:26:47.136222   27491 default_sa.go:34] waiting for default service account to be created ...
	I0311 20:26:47.320676   27491 request.go:629] Waited for 184.386345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0311 20:26:47.320767   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/default/serviceaccounts
	I0311 20:26:47.320779   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:47.320789   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:47.320799   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:47.326192   27491 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0311 20:26:47.326412   27491 default_sa.go:45] found service account: "default"
	I0311 20:26:47.326429   27491 default_sa.go:55] duration metric: took 190.197475ms for default service account to be created ...
	I0311 20:26:47.326438   27491 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 20:26:47.520810   27491 request.go:629] Waited for 194.258488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:26:47.520858   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/namespaces/kube-system/pods
	I0311 20:26:47.520863   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:47.520871   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:47.520875   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:47.528205   27491 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0311 20:26:47.535274   27491 system_pods.go:86] 24 kube-system pods found
	I0311 20:26:47.535299   27491 system_pods.go:89] "coredns-5dd5756b68-d6f2x" [ddc7bef4-f6c5-442f-8149-e52a1822986d] Running
	I0311 20:26:47.535304   27491 system_pods.go:89] "coredns-5dd5756b68-kq47h" [f2a70553-206f-4d11-b32f-01ddd30db8ec] Running
	I0311 20:26:47.535308   27491 system_pods.go:89] "etcd-ha-834040" [76aef9d7-e8f7-4675-92db-614a3723f8b0] Running
	I0311 20:26:47.535312   27491 system_pods.go:89] "etcd-ha-834040-m02" [c87b59c2-5dcd-4217-9d64-1eab2ecf0075] Running
	I0311 20:26:47.535316   27491 system_pods.go:89] "etcd-ha-834040-m03" [554134f9-440a-4fce-8af9-f25a1a336610] Running
	I0311 20:26:47.535320   27491 system_pods.go:89] "kindnet-bw656" [edb13135-e5b5-46df-922e-5ebfb444c219] Running
	I0311 20:26:47.535324   27491 system_pods.go:89] "kindnet-cf888" [a0eb1481-fce7-4ede-9727-28ff9f3475b1] Running
	I0311 20:26:47.535327   27491 system_pods.go:89] "kindnet-rqcq6" [7c368ac4-0fa3-4185-98a7-40df481939ee] Running
	I0311 20:26:47.535331   27491 system_pods.go:89] "kube-apiserver-ha-834040" [f1a21652-f5f0-4ff4-a181-9719fbb72320] Running
	I0311 20:26:47.535334   27491 system_pods.go:89] "kube-apiserver-ha-834040-m02" [eaadd58d-4c00-4dd8-94fe-2d28bed895f5] Running
	I0311 20:26:47.535338   27491 system_pods.go:89] "kube-apiserver-ha-834040-m03" [60f94aa4-4332-4f32-b9ed-326492680654] Running
	I0311 20:26:47.535345   27491 system_pods.go:89] "kube-controller-manager-ha-834040" [48fff24f-f490-4cad-ae02-67dd35208820] Running
	I0311 20:26:47.535349   27491 system_pods.go:89] "kube-controller-manager-ha-834040-m02" [a3418676-a178-4f18-accd-cbc835234b6f] Running
	I0311 20:26:47.535354   27491 system_pods.go:89] "kube-controller-manager-ha-834040-m03" [44b609b0-feee-4b2d-a414-258c11a66810] Running
	I0311 20:26:47.535358   27491 system_pods.go:89] "kube-proxy-4kkwc" [bd3491fa-75a9-46ff-b61e-a818c82f1fc6] Running
	I0311 20:26:47.535365   27491 system_pods.go:89] "kube-proxy-dsjx4" [b8dccd4a-d900-4c56-8861-4c19dbda4a31] Running
	I0311 20:26:47.535369   27491 system_pods.go:89] "kube-proxy-h8svv" [3a7973ca-9a35-4190-8845-cc685619b093] Running
	I0311 20:26:47.535375   27491 system_pods.go:89] "kube-scheduler-ha-834040" [665bbcfc-d34c-46f7-8c3c-73380466fb35] Running
	I0311 20:26:47.535379   27491 system_pods.go:89] "kube-scheduler-ha-834040-m02" [3429847c-a119-4dba-bcfc-f41e6bd8b351] Running
	I0311 20:26:47.535391   27491 system_pods.go:89] "kube-scheduler-ha-834040-m03" [84aad696-7a60-4242-a214-17c9e4cf2bf6] Running
	I0311 20:26:47.535397   27491 system_pods.go:89] "kube-vip-ha-834040" [d539e386-31f6-4b7c-9e36-8a413b82a4a8] Running
	I0311 20:26:47.535400   27491 system_pods.go:89] "kube-vip-ha-834040-m02" [59d64aa5-94ab-44d5-a42e-5453eb2c0b37] Running
	I0311 20:26:47.535404   27491 system_pods.go:89] "kube-vip-ha-834040-m03" [6a95c6cb-4f07-49d7-abaa-facdc4b0e799] Running
	I0311 20:26:47.535407   27491 system_pods.go:89] "storage-provisioner" [bbc64228-86a0-4e0c-9eef-f4644439ca13] Running
	I0311 20:26:47.535415   27491 system_pods.go:126] duration metric: took 208.971727ms to wait for k8s-apps to be running ...
	I0311 20:26:47.535423   27491 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 20:26:47.535469   27491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:26:47.553919   27491 system_svc.go:56] duration metric: took 18.485702ms WaitForService to wait for kubelet
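The kubelet service check the harness runs over SSH can be repeated from the host; a sketch, assuming the ha-834040 minikube profile from this run still exists:

    minikube -p ha-834040 ssh -- sudo systemctl is-active kubelet   # prints "active" when kubelet is running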
	I0311 20:26:47.553950   27491 kubeadm.go:576] duration metric: took 40.273942997s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 20:26:47.553971   27491 node_conditions.go:102] verifying NodePressure condition ...
	I0311 20:26:47.720295   27491 request.go:629] Waited for 166.25817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.128:8443/api/v1/nodes
	I0311 20:26:47.720345   27491 round_trippers.go:463] GET https://192.168.39.128:8443/api/v1/nodes
	I0311 20:26:47.720353   27491 round_trippers.go:469] Request Headers:
	I0311 20:26:47.720365   27491 round_trippers.go:473]     Accept: application/json, */*
	I0311 20:26:47.720371   27491 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0311 20:26:47.724896   27491 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0311 20:26:47.726176   27491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 20:26:47.726204   27491 node_conditions.go:123] node cpu capacity is 2
	I0311 20:26:47.726216   27491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 20:26:47.726221   27491 node_conditions.go:123] node cpu capacity is 2
	I0311 20:26:47.726226   27491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 20:26:47.726229   27491 node_conditions.go:123] node cpu capacity is 2
	I0311 20:26:47.726233   27491 node_conditions.go:105] duration metric: took 172.255909ms to run NodePressure ...
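The capacity figures behind the NodePressure lines above (2 CPUs and 17734596Ki of ephemeral storage per node) can be read back with a JSONPath query; a sketch, assuming the ha-834040 context:

    kubectl --context ha-834040 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\tcpu="}{.status.capacity.cpu}{"\tephemeral-storage="}{.status.capacity.ephemeral-storage}{"\n"}{end}'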
	I0311 20:26:47.726246   27491 start.go:240] waiting for startup goroutines ...
	I0311 20:26:47.726268   27491 start.go:254] writing updated cluster config ...
	I0311 20:26:47.726546   27491 ssh_runner.go:195] Run: rm -f paused
	I0311 20:26:47.778492   27491 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 20:26:47.780590   27491 out.go:177] * Done! kubectl is now configured to use "ha-834040" cluster and "default" namespace by default
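The one-minor-version skew reported above (kubectl 1.29.2 against a 1.28.4 control plane) is within kubectl's supported +/-1 minor-version skew. With the context configured, the three control-plane nodes from this run can be listed directly; a sketch:

    kubectl config current-context    # expected: ha-834040
    kubectl get nodes                 # ha-834040, ha-834040-m02 and ha-834040-m03 should report Ready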
	
	
	==> CRI-O <==
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.364651651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189083364629050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=795c9d31-81c3-4436-a53c-5f5748b848e6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.365778510Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6c7b442-e936-451a-aa81-e495cf0ed5b5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.365853870Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6c7b442-e936-451a-aa81-e495cf0ed5b5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.366223187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710188810909650029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ff55cc7dd7ce86b2ec6d65b88532b25bd348edd26139398dbf126195687f15,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710188690043049602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc1d1d2e164dd343671afbbbe3ffc3de1a7f9e87e3fb6c2094eed1725c62105,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710188690043182789,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626343031244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626308762017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:651df645b80859aac3940b6c46f612b7dfa6e63196eea16e71a4699e1dacd90d,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710188625312421373,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde13375794363aa708c796adf81c991290316a9abb1584bd0d1a6b7fcbc1239,PodSandboxId:97f4eaedf7381336de1f270c1327a82bac27c26c771a5df3e32cc259ef113390,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710188623496900367,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710188621618767385,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de629d59c426e67a341320405ba6e2db0a43a77097e61b6123f4636359ee3412,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710188602988167367,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710188600841991262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: et
cd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfa6c7eaf9de4ab3088d26a5835e9b00f125cd279c3b56757edcb48e368cbf8,PodSandboxId:ba0d4adac5c720e3d7577394479b4143283e2c9ddcc61e2ab1e57dcd4664342a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710188600790600914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710188600790390415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c6fc6f4ca02e29aec794ea48b682294a80ffbea548013775fff8dfd449a944,PodSandboxId:1d3a02c48636bed52fd7f56fa9670f0a3c8e5e4f596b89faa29081f66f463fc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710188600668037923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6c7b442-e936-451a-aa81-e495cf0ed5b5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.418308411Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d9be1b9-45d8-430c-a1e6-434447873834 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.418376533Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d9be1b9-45d8-430c-a1e6-434447873834 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.420365951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=425a9d9b-c81f-40d7-afe2-e44f468f6e33 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.420942060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189083420917560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=425a9d9b-c81f-40d7-afe2-e44f468f6e33 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.422018668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bcc4f6b-54f2-4373-9d7b-32dd06458f3a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.422192673Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bcc4f6b-54f2-4373-9d7b-32dd06458f3a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.422566803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710188810909650029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ff55cc7dd7ce86b2ec6d65b88532b25bd348edd26139398dbf126195687f15,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710188690043049602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc1d1d2e164dd343671afbbbe3ffc3de1a7f9e87e3fb6c2094eed1725c62105,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710188690043182789,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626343031244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626308762017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:651df645b80859aac3940b6c46f612b7dfa6e63196eea16e71a4699e1dacd90d,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710188625312421373,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde13375794363aa708c796adf81c991290316a9abb1584bd0d1a6b7fcbc1239,PodSandboxId:97f4eaedf7381336de1f270c1327a82bac27c26c771a5df3e32cc259ef113390,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710188623496900367,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710188621618767385,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de629d59c426e67a341320405ba6e2db0a43a77097e61b6123f4636359ee3412,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710188602988167367,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710188600841991262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: et
cd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfa6c7eaf9de4ab3088d26a5835e9b00f125cd279c3b56757edcb48e368cbf8,PodSandboxId:ba0d4adac5c720e3d7577394479b4143283e2c9ddcc61e2ab1e57dcd4664342a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710188600790600914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710188600790390415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c6fc6f4ca02e29aec794ea48b682294a80ffbea548013775fff8dfd449a944,PodSandboxId:1d3a02c48636bed52fd7f56fa9670f0a3c8e5e4f596b89faa29081f66f463fc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710188600668037923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bcc4f6b-54f2-4373-9d7b-32dd06458f3a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.467522107Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b200d99-67f0-4169-9190-0968f7a00205 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.467624089Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b200d99-67f0-4169-9190-0968f7a00205 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.468989242Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31ed1e17-4688-45da-bbcc-3f41937d6133 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.469612766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189083469543531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31ed1e17-4688-45da-bbcc-3f41937d6133 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.470184735Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc2f7f2a-b30f-46b4-beed-8f123515f55f name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.470237619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc2f7f2a-b30f-46b4-beed-8f123515f55f name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.470502227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710188810909650029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ff55cc7dd7ce86b2ec6d65b88532b25bd348edd26139398dbf126195687f15,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710188690043049602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc1d1d2e164dd343671afbbbe3ffc3de1a7f9e87e3fb6c2094eed1725c62105,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710188690043182789,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626343031244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626308762017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:651df645b80859aac3940b6c46f612b7dfa6e63196eea16e71a4699e1dacd90d,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710188625312421373,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde13375794363aa708c796adf81c991290316a9abb1584bd0d1a6b7fcbc1239,PodSandboxId:97f4eaedf7381336de1f270c1327a82bac27c26c771a5df3e32cc259ef113390,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710188623496900367,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710188621618767385,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de629d59c426e67a341320405ba6e2db0a43a77097e61b6123f4636359ee3412,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710188602988167367,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710188600841991262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: et
cd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfa6c7eaf9de4ab3088d26a5835e9b00f125cd279c3b56757edcb48e368cbf8,PodSandboxId:ba0d4adac5c720e3d7577394479b4143283e2c9ddcc61e2ab1e57dcd4664342a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710188600790600914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710188600790390415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c6fc6f4ca02e29aec794ea48b682294a80ffbea548013775fff8dfd449a944,PodSandboxId:1d3a02c48636bed52fd7f56fa9670f0a3c8e5e4f596b89faa29081f66f463fc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710188600668037923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc2f7f2a-b30f-46b4-beed-8f123515f55f name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.511947846Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=084906e6-f3bd-4056-9761-791a955b30c0 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.512051923Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=084906e6-f3bd-4056-9761-791a955b30c0 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.512949515Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edbfb2dd-4be1-48a9-8046-7fd53d206a68 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.513492065Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189083513467097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edbfb2dd-4be1-48a9-8046-7fd53d206a68 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.514058366Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05982859-f99a-4e88-bd5c-4e73448b9113 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.514219621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05982859-f99a-4e88-bd5c-4e73448b9113 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:31:23 ha-834040 crio[675]: time="2024-03-11 20:31:23.514504741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710188810909650029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48ff55cc7dd7ce86b2ec6d65b88532b25bd348edd26139398dbf126195687f15,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710188690043049602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc1d1d2e164dd343671afbbbe3ffc3de1a7f9e87e3fb6c2094eed1725c62105,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710188690043182789,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626343031244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710188626308762017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotation
s:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:651df645b80859aac3940b6c46f612b7dfa6e63196eea16e71a4699e1dacd90d,PodSandboxId:023c0d7d16ddd7c9611dfa16f7162aadb33b573fbf584364acdf6d31594cb88e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710188625312421373,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde13375794363aa708c796adf81c991290316a9abb1584bd0d1a6b7fcbc1239,PodSandboxId:97f4eaedf7381336de1f270c1327a82bac27c26c771a5df3e32cc259ef113390,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710188623496900367,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710188621618767385,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de629d59c426e67a341320405ba6e2db0a43a77097e61b6123f4636359ee3412,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710188602988167367,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710188600841991262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: et
cd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfa6c7eaf9de4ab3088d26a5835e9b00f125cd279c3b56757edcb48e368cbf8,PodSandboxId:ba0d4adac5c720e3d7577394479b4143283e2c9ddcc61e2ab1e57dcd4664342a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710188600790600914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710188600790390415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2c6fc6f4ca02e29aec794ea48b682294a80ffbea548013775fff8dfd449a944,PodSandboxId:1d3a02c48636bed52fd7f56fa9670f0a3c8e5e4f596b89faa29081f66f463fc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710188600668037923,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05982859-f99a-4e88-bd5c-4e73448b9113 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	251e9f2d7df5c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   417164b9b0cb4       busybox-5b5d89c9d6-d62cw
	afc1d1d2e164d       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      6 minutes ago       Running             kube-vip                  1                   dcb18e5f12de1       kube-vip-ha-834040
	48ff55cc7dd7c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       1                   023c0d7d16ddd       storage-provisioner
	7be345e0f22ca       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   4860ab9172968       coredns-5dd5756b68-kq47h
	6926d89f93fa7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   94384bd2f8c98       coredns-5dd5756b68-d6f2x
	651df645b8085       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       0                   023c0d7d16ddd       storage-provisioner
	bde1337579436       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago       Running             kindnet-cni               0                   97f4eaedf7381       kindnet-bw656
	ab5ff27a1d4cb       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago       Running             kube-proxy                0                   a9e018e6df6e7       kube-proxy-h8svv
	de629d59c426e       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     8 minutes ago       Exited              kube-vip                  0                   dcb18e5f12de1       kube-vip-ha-834040
	4395af23a1752       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      8 minutes ago       Running             etcd                      0                   3e8bbccfbf388       etcd-ha-834040
	abfa6c7eaf9de       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      8 minutes ago       Running             kube-controller-manager   0                   ba0d4adac5c72       kube-controller-manager-ha-834040
	4b273e6fedf1a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      8 minutes ago       Running             kube-scheduler            0                   85d4eab358f29       kube-scheduler-ha-834040
	d2c6fc6f4ca02       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      8 minutes ago       Running             kube-apiserver            0                   1d3a02c48636b       kube-apiserver-ha-834040
	
	
	==> coredns [6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc] <==
	[INFO] 10.244.0.4:50316 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110698s
	[INFO] 10.244.1.2:34160 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001878377s
	[INFO] 10.244.1.2:53820 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100107s
	[INFO] 10.244.1.2:35233 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128279s
	[INFO] 10.244.1.2:40701 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000113335s
	[INFO] 10.244.1.2:51999 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000206194s
	[INFO] 10.244.2.2:36958 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164814s
	[INFO] 10.244.2.2:39443 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195028s
	[INFO] 10.244.2.2:39519 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001598365s
	[INFO] 10.244.2.2:57263 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000263661s
	[INFO] 10.244.0.4:58360 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097628s
	[INFO] 10.244.0.4:34351 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182428s
	[INFO] 10.244.1.2:54939 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000278877s
	[INFO] 10.244.1.2:37033 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000194177s
	[INFO] 10.244.1.2:37510 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190608s
	[INFO] 10.244.2.2:41536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108104s
	[INFO] 10.244.2.2:41561 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122082s
	[INFO] 10.244.0.4:42660 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221566s
	[INFO] 10.244.0.4:53159 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000188136s
	[INFO] 10.244.0.4:41046 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100215s
	[INFO] 10.244.0.4:50387 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176539s
	[INFO] 10.244.1.2:54773 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120996s
	[INFO] 10.244.1.2:51952 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119653s
	[INFO] 10.244.2.2:59116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134078s
	[INFO] 10.244.2.2:47917 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128001s
	
	
	==> coredns [7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450] <==
	[INFO] 10.244.0.4:51252 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.003843725s
	[INFO] 10.244.0.4:37817 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.010513423s
	[INFO] 10.244.1.2:41192 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00179634s
	[INFO] 10.244.2.2:57444 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215144s
	[INFO] 10.244.2.2:56538 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00210828s
	[INFO] 10.244.0.4:58455 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118841s
	[INFO] 10.244.0.4:49345 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003481053s
	[INFO] 10.244.0.4:56716 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000187984s
	[INFO] 10.244.0.4:35412 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160258s
	[INFO] 10.244.1.2:56957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150599s
	[INFO] 10.244.1.2:53790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001450755s
	[INFO] 10.244.1.2:53927 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207107s
	[INFO] 10.244.2.2:55011 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001744357s
	[INFO] 10.244.2.2:59931 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000316475s
	[INFO] 10.244.2.2:52694 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184762s
	[INFO] 10.244.2.2:51472 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080603s
	[INFO] 10.244.0.4:33893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185444s
	[INFO] 10.244.0.4:54135 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072181s
	[INFO] 10.244.1.2:36921 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189721s
	[INFO] 10.244.2.2:60407 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015337s
	[INFO] 10.244.2.2:45057 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177157s
	[INFO] 10.244.1.2:52652 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273969s
	[INFO] 10.244.1.2:41042 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160192s
	[INFO] 10.244.2.2:55743 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233222s
	[INFO] 10.244.2.2:43090 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000228333s
	
	
	==> describe nodes <==
	Name:               ha-834040
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T20_23_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:23:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:31:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:27:01 +0000   Mon, 11 Mar 2024 20:23:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:27:01 +0000   Mon, 11 Mar 2024 20:23:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:27:01 +0000   Mon, 11 Mar 2024 20:23:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:27:01 +0000   Mon, 11 Mar 2024 20:23:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-834040
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6cb0aa00d5a4d388da50e20e0a9ccef
	  System UUID:                f6cb0aa0-0d5a-4d38-8da5-0e20e0a9ccef
	  Boot ID:                    47b6723c-3999-42a9-a19b-9f1c67aaecb8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-d62cw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 coredns-5dd5756b68-d6f2x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m43s
	  kube-system                 coredns-5dd5756b68-kq47h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m43s
	  kube-system                 etcd-ha-834040                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m56s
	  kube-system                 kindnet-bw656                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m43s
	  kube-system                 kube-apiserver-ha-834040             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-controller-manager-ha-834040    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-proxy-h8svv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 kube-scheduler-ha-834040             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-vip-ha-834040                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m41s  kube-proxy       
	  Normal  Starting                 7m56s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m56s  kubelet          Node ha-834040 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m56s  kubelet          Node ha-834040 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m56s  kubelet          Node ha-834040 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m56s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m44s  node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Normal  NodeReady                7m39s  kubelet          Node ha-834040 status is now: NodeReady
	  Normal  RegisteredNode           6m16s  node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Normal  RegisteredNode           5m2s   node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	
	
	Name:               ha-834040-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T20_24_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:24:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:27:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 11 Mar 2024 20:27:08 +0000   Mon, 11 Mar 2024 20:28:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 11 Mar 2024 20:27:08 +0000   Mon, 11 Mar 2024 20:28:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 11 Mar 2024 20:27:08 +0000   Mon, 11 Mar 2024 20:28:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 11 Mar 2024 20:27:08 +0000   Mon, 11 Mar 2024 20:28:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-834040-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d932b403e92c478480bfc9080f018c7a
	  System UUID:                d932b403-e92c-4784-80bf-c9080f018c7a
	  Boot ID:                    21b79699-e0c8-443f-8316-dd2d55446b7d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-h9jx5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 etcd-ha-834040-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m44s
	  kube-system                 kindnet-rqcq6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m45s
	  kube-system                 kube-apiserver-ha-834040-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 kube-controller-manager-ha-834040-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 kube-proxy-dsjx4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 kube-scheduler-ha-834040-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-vip-ha-834040-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m26s  kube-proxy       
	  Normal  RegisteredNode  6m44s  node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  RegisteredNode  6m16s  node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  RegisteredNode  5m2s   node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  NodeNotReady    2m52s  node-controller  Node ha-834040-m02 status is now: NodeNotReady
	
	
	Name:               ha-834040-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T20_26_07_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:26:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:31:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:27:03 +0000   Mon, 11 Mar 2024 20:26:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:27:03 +0000   Mon, 11 Mar 2024 20:26:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:27:03 +0000   Mon, 11 Mar 2024 20:26:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:27:03 +0000   Mon, 11 Mar 2024 20:26:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    ha-834040-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6ff34b6936e4e2fada32a020c96ac8f
	  System UUID:                e6ff34b6-936e-4e2f-ada3-2a020c96ac8f
	  Boot ID:                    d1e0d295-4977-4e81-8d43-f63a102c1a44
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-mx5b4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 etcd-ha-834040-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m20s
	  kube-system                 kindnet-cf888                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-834040-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-controller-manager-ha-834040-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-4kkwc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-834040-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-vip-ha-834040-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m6s   kube-proxy       
	  Normal  RegisteredNode  5m21s  node-controller  Node ha-834040-m03 event: Registered Node ha-834040-m03 in Controller
	  Normal  RegisteredNode  5m19s  node-controller  Node ha-834040-m03 event: Registered Node ha-834040-m03 in Controller
	  Normal  RegisteredNode  5m2s   node-controller  Node ha-834040-m03 event: Registered Node ha-834040-m03 in Controller
	
	
	Name:               ha-834040-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T20_27_30_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:27:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:31:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:28:00 +0000   Mon, 11 Mar 2024 20:27:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:28:00 +0000   Mon, 11 Mar 2024 20:27:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:28:00 +0000   Mon, 11 Mar 2024 20:27:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:28:00 +0000   Mon, 11 Mar 2024 20:27:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-834040-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 01d975a4d97b45958b00e8cebd68bf34
	  System UUID:                01d975a4-d97b-4595-8b00-e8cebd68bf34
	  Boot ID:                    20c51306-7a45-415f-959d-65a8140505c6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gdbjb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-proxy-wc99r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x5 over 3m55s)  kubelet          Node ha-834040-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x5 over 3m55s)  kubelet          Node ha-834040-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x5 over 3m55s)  kubelet          Node ha-834040-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal  NodeReady                3m46s                  kubelet          Node ha-834040-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar11 20:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051930] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043288] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.541344] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.468506] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[Mar11 20:23] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.744921] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.061444] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067061] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.157638] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.161215] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.262542] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +5.181266] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.062600] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.584713] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.482512] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.376234] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.096131] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.894025] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.119032] kauditd_printk_skb: 58 callbacks suppressed
	[Mar11 20:24] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1] <==
	{"level":"warn","ts":"2024-03-11T20:31:23.755335Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.794475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.828352Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.836704Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.840668Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.852276Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.855348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.858669Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.864293Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.874816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.87804Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.886046Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.892602Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.899008Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.903022Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.907169Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.914396Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.920063Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.925606Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.929568Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.932645Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.937724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.943862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.949849Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-11T20:31:23.955582Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fa515506e66f6916","from":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:31:24 up 8 min,  0 users,  load average: 0.39, 0.42, 0.25
	Linux ha-834040 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bde13375794363aa708c796adf81c991290316a9abb1584bd0d1a6b7fcbc1239] <==
	I0311 20:30:44.163821       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	I0311 20:30:54.170345       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0311 20:30:54.170421       1 main.go:227] handling current node
	I0311 20:30:54.170443       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0311 20:30:54.170467       1 main.go:250] Node ha-834040-m02 has CIDR [10.244.1.0/24] 
	I0311 20:30:54.170612       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0311 20:30:54.170634       1 main.go:250] Node ha-834040-m03 has CIDR [10.244.2.0/24] 
	I0311 20:30:54.170691       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0311 20:30:54.170709       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	I0311 20:31:04.178055       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0311 20:31:04.178519       1 main.go:227] handling current node
	I0311 20:31:04.178600       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0311 20:31:04.178677       1 main.go:250] Node ha-834040-m02 has CIDR [10.244.1.0/24] 
	I0311 20:31:04.178886       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0311 20:31:04.178918       1 main.go:250] Node ha-834040-m03 has CIDR [10.244.2.0/24] 
	I0311 20:31:04.179029       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0311 20:31:04.179141       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	I0311 20:31:14.185637       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0311 20:31:14.185848       1 main.go:227] handling current node
	I0311 20:31:14.185891       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0311 20:31:14.185925       1 main.go:250] Node ha-834040-m02 has CIDR [10.244.1.0/24] 
	I0311 20:31:14.186168       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0311 20:31:14.186213       1 main.go:250] Node ha-834040-m03 has CIDR [10.244.2.0/24] 
	I0311 20:31:14.186284       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0311 20:31:14.186302       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d2c6fc6f4ca02e29aec794ea48b682294a80ffbea548013775fff8dfd449a944] <==
	Trace[1545161709]: [4.626139808s] [4.626139808s] END
	I0311 20:24:53.765146       1 trace.go:236] Trace[401973741]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:814b6913-b89b-4423-b345-d52032cab5fb,client:192.168.39.101,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (11-Mar-2024 20:24:46.625) (total time: 7139ms):
	Trace[401973741]: [7.139891131s] [7.139891131s] END
	I0311 20:24:53.766648       1 trace.go:236] Trace[2066086163]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7d46111c-f595-403d-8bcc-203f0f24e52c,client:192.168.39.101,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (11-Mar-2024 20:24:47.542) (total time: 6223ms):
	Trace[2066086163]: [6.223644822s] [6.223644822s] END
	I0311 20:24:53.767465       1 trace.go:236] Trace[186188628]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ef41ab4c-daf8-4540-9d88-ee64ffbbd3c5,client:192.168.39.101,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (11-Mar-2024 20:24:47.541) (total time: 6225ms):
	Trace[186188628]: [6.22596821s] [6.22596821s] END
	I0311 20:24:53.767772       1 trace.go:236] Trace[448873650]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:709c42c9-1ef3-4c8c-89b3-f722acb945d1,client:192.168.39.101,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (11-Mar-2024 20:24:48.981) (total time: 4786ms):
	Trace[448873650]: [4.786497636s] [4.786497636s] END
	I0311 20:27:30.140778       1 trace.go:236] Trace[947205264]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:926c942a-690c-476a-811e-59e2651730cc,client:192.168.39.44,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (11-Mar-2024 20:27:29.602) (total time: 538ms):
	Trace[947205264]: ["Create etcd3" audit-id:926c942a-690c-476a-811e-59e2651730cc,key:/events/default/ha-834040-m04.17bbcfb2408ab3c3,type:*core.Event,resource:events 537ms (20:27:29.602)
	Trace[947205264]:  ---"TransformToStorage succeeded" 230ms (20:27:29.833)
	Trace[947205264]:  ---"Txn call succeeded" 307ms (20:27:30.140)]
	Trace[947205264]: [538.03371ms] [538.03371ms] END
	I0311 20:27:30.142812       1 trace.go:236] Trace[928836211]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:3a61f083-81f6-4428-8739-11361a1aa52b,client:192.168.39.128,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:daemon-set-controller,verb:POST (11-Mar-2024 20:27:29.622) (total time: 519ms):
	Trace[928836211]: [519.864976ms] [519.864976ms] END
	I0311 20:27:30.143832       1 trace.go:236] Trace[1655549944]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:f198e904-57e8-4ad7-a738-8d0e832e0ba8,client:192.168.39.128,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:daemon-set-controller,verb:POST (11-Mar-2024 20:27:29.626) (total time: 516ms):
	Trace[1655549944]: [516.9239ms] [516.9239ms] END
	I0311 20:27:30.154288       1 trace.go:236] Trace[483810055]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:51e0cce3-1513-4212-962f-c083ba484c2c,client:192.168.39.128,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-834040-m04,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:ttl-controller,verb:PATCH (11-Mar-2024 20:27:29.624) (total time: 529ms):
	Trace[483810055]: ["GuaranteedUpdate etcd3" audit-id:51e0cce3-1513-4212-962f-c083ba484c2c,key:/minions/ha-834040-m04,type:*core.Node,resource:nodes 529ms (20:27:29.624)
	Trace[483810055]:  ---"Txn call completed" 204ms (20:27:29.833)
	Trace[483810055]:  ---"Txn call completed" 319ms (20:27:30.153)]
	Trace[483810055]: ---"About to apply patch" 205ms (20:27:29.833)
	Trace[483810055]: ---"Object stored in database" 319ms (20:27:30.154)
	Trace[483810055]: [529.345251ms] [529.345251ms] END
	
	
	==> kube-controller-manager [abfa6c7eaf9de4ab3088d26a5835e9b00f125cd279c3b56757edcb48e368cbf8] <==
	I0311 20:26:49.318132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="29.194831ms"
	I0311 20:26:49.319163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="101.457µs"
	I0311 20:26:51.039886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.283516ms"
	I0311 20:26:51.040448       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.855µs"
	I0311 20:26:51.378805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="90.609µs"
	I0311 20:26:51.614801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="44.290793ms"
	I0311 20:26:51.614915       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.225µs"
	I0311 20:26:51.659412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.06168ms"
	I0311 20:26:51.659779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="83.897µs"
	E0311 20:27:27.977493       1 certificate_controller.go:146] Sync csr-pnzbp failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-pnzbp": the object has been modified; please apply your changes to the latest version and try again
	E0311 20:27:27.982824       1 certificate_controller.go:146] Sync csr-pnzbp failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-pnzbp": the object has been modified; please apply your changes to the latest version and try again
	I0311 20:27:29.595787       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-834040-m04\" does not exist"
	I0311 20:27:29.840525       1 range_allocator.go:380] "Set node PodCIDR" node="ha-834040-m04" podCIDRs=["10.244.3.0/24"]
	I0311 20:27:30.148695       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wc99r"
	I0311 20:27:30.148756       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gdbjb"
	I0311 20:27:30.277285       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-jckf6"
	I0311 20:27:30.279557       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-lhqdl"
	I0311 20:27:30.420455       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-btkbp"
	I0311 20:27:30.432844       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-9ksnv"
	I0311 20:27:34.246405       1 event.go:307] "Event occurred" object="ha-834040-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller"
	I0311 20:27:34.266245       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-834040-m04"
	I0311 20:27:37.380506       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-834040-m04"
	I0311 20:28:31.258348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-834040-m04"
	I0311 20:28:31.440290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.30239ms"
	I0311 20:28:31.441311       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="141.98µs"
	
	
	==> kube-proxy [ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25] <==
	I0311 20:23:41.879943       1 server_others.go:69] "Using iptables proxy"
	I0311 20:23:41.908431       1 node.go:141] Successfully retrieved node IP: 192.168.39.128
	I0311 20:23:42.020698       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 20:23:42.020756       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 20:23:42.036364       1 server_others.go:152] "Using iptables Proxier"
	I0311 20:23:42.036526       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 20:23:42.037206       1 server.go:846] "Version info" version="v1.28.4"
	I0311 20:23:42.037316       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:23:42.042327       1 config.go:315] "Starting node config controller"
	I0311 20:23:42.042430       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 20:23:42.048456       1 config.go:188] "Starting service config controller"
	I0311 20:23:42.048469       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 20:23:42.048491       1 config.go:97] "Starting endpoint slice config controller"
	I0311 20:23:42.048502       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 20:23:42.143225       1 shared_informer.go:318] Caches are synced for node config
	I0311 20:23:42.148691       1 shared_informer.go:318] Caches are synced for service config
	I0311 20:23:42.148672       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861] <==
	W0311 20:23:24.248261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 20:23:24.248399       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0311 20:23:24.253937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 20:23:24.253997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 20:23:25.214421       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 20:23:25.214471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0311 20:23:25.245746       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 20:23:25.245830       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 20:23:25.310965       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 20:23:25.311141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 20:23:25.339716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 20:23:25.339771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0311 20:23:25.418715       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 20:23:25.418795       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 20:23:25.483360       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 20:23:25.484056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 20:23:25.664472       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 20:23:25.664528       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0311 20:23:28.126417       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0311 20:26:48.785891       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-h9jx5\": pod busybox-5b5d89c9d6-h9jx5 is already assigned to node \"ha-834040-m02\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-h9jx5" node="ha-834040-m02"
	E0311 20:26:48.793180       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 34e02cf4-79e4-4bbc-ae43-c0f5ef80637a(default/busybox-5b5d89c9d6-h9jx5) wasn't assumed so cannot be forgotten"
	E0311 20:26:48.793621       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-h9jx5\": pod busybox-5b5d89c9d6-h9jx5 is already assigned to node \"ha-834040-m02\"" pod="default/busybox-5b5d89c9d6-h9jx5"
	I0311 20:26:48.793838       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-h9jx5" node="ha-834040-m02"
	E0311 20:27:30.190479       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wc99r\": pod kube-proxy-wc99r is already assigned to node \"ha-834040-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wc99r" node="ha-834040-m04"
	E0311 20:27:30.195971       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wc99r\": pod kube-proxy-wc99r is already assigned to node \"ha-834040-m04\"" pod="kube-system/kube-proxy-wc99r"
	
	
	==> kubelet <==
	Mar 11 20:26:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 20:26:48 ha-834040 kubelet[1373]: I0311 20:26:48.831597    1373 topology_manager.go:215] "Topology Admit Handler" podUID="ea39821f-426d-43bf-a955-77e3a308239e" podNamespace="default" podName="busybox-5b5d89c9d6-d62cw"
	Mar 11 20:26:48 ha-834040 kubelet[1373]: W0311 20:26:48.840824    1373 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-834040" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-834040' and this object
	Mar 11 20:26:48 ha-834040 kubelet[1373]: E0311 20:26:48.840924    1373 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-834040" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-834040' and this object
	Mar 11 20:26:48 ha-834040 kubelet[1373]: I0311 20:26:48.940186    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv5r8\" (UniqueName: \"kubernetes.io/projected/ea39821f-426d-43bf-a955-77e3a308239e-kube-api-access-xv5r8\") pod \"busybox-5b5d89c9d6-d62cw\" (UID: \"ea39821f-426d-43bf-a955-77e3a308239e\") " pod="default/busybox-5b5d89c9d6-d62cw"
	Mar 11 20:27:27 ha-834040 kubelet[1373]: E0311 20:27:27.614646    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:27:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:27:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:27:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:27:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 20:28:27 ha-834040 kubelet[1373]: E0311 20:28:27.614520    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:28:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:28:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:28:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:28:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 20:29:27 ha-834040 kubelet[1373]: E0311 20:29:27.615811    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:29:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:29:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:29:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:29:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 20:30:27 ha-834040 kubelet[1373]: E0311 20:30:27.613413    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:30:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:30:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:30:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:30:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
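Note on the kubelet entries above: the repeated "Could not set up iptables canary" errors occur because the guest kernel does not provide an ip6tables "nat" table (the ip6table_nat module is not loaded), so ip6tables cannot create the KUBE-KUBELET-CANARY chain. The check fails once a minute and is typically harmless for IPv4-only clusters like this one. A minimal Go sketch of probing for that condition from a test helper (the command invocation and helper are illustrative, not kubelet or minikube code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// natTableAvailable reports whether the ip6tables "nat" table can be listed.
// When the ip6table_nat kernel module is missing, ip6tables prints
// "can't initialize ip6tables table `nat': Table does not exist" and exits
// non-zero, matching the kubelet log above. Illustrative helper only.
func natTableAvailable() (bool, string) {
	out, err := exec.Command("ip6tables", "-w", "-t", "nat", "-L", "-n").CombinedOutput()
	if err != nil {
		return false, fmt.Sprintf("%v: %s", err, out)
	}
	return true, ""
}

func main() {
	ok, detail := natTableAvailable()
	fmt.Println("ip6tables nat usable:", ok, detail)
}
```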
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-834040 -n ha-834040
helpers_test.go:261: (dbg) Run:  kubectl --context ha-834040 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/RestartSecondaryNode (61.08s)
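Also worth noting from the scheduler section of the same log: the DefaultBinder "already assigned to node" failures are optimistic-concurrency conflicts on the pods/binding subresource; the pod had already been bound, so the scheduler aborts the retry ("Pod has been assigned to node. Abort adding it back to queue."). A hypothetical client-go sketch of the request involved, treating that conflict as "already bound" (the helper name and flow are assumptions, not scheduler code):

```go
package bindexample

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPod issues a pods/binding like the scheduler's DefaultBinder plugin does.
// If the pod is already assigned, the API server answers with a Conflict, which
// is the error text seen in the log above; here it is treated as success rather
// than a failure. Illustrative sketch only.
func bindPod(ctx context.Context, cs kubernetes.Interface, namespace, pod, node string) error {
	err := cs.CoreV1().Pods(namespace).Bind(ctx, &corev1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: pod, Namespace: namespace},
		Target:     corev1.ObjectReference{Kind: "Node", Name: node},
	}, metav1.CreateOptions{})
	if apierrors.IsConflict(err) {
		fmt.Printf("pod %s/%s already assigned, skipping re-bind\n", namespace, pod)
		return nil
	}
	return err
}
```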

                                                
                                    
x
+
TestMutliControlPlane/serial/RestartClusterKeepsNodes (386.4s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-834040 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-834040 -v=7 --alsologtostderr
E0311 20:31:58.808343   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:32:26.491729   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:32:38.935997   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-834040 -v=7 --alsologtostderr: exit status 82 (2m2.704520619s)

                                                
                                                
-- stdout --
	* Stopping node "ha-834040-m04"  ...
	* Stopping node "ha-834040-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:31:25.510711   32857 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:31:25.510978   32857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:31:25.510991   32857 out.go:304] Setting ErrFile to fd 2...
	I0311 20:31:25.510997   32857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:31:25.511224   32857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:31:25.511491   32857 out.go:298] Setting JSON to false
	I0311 20:31:25.511591   32857 mustload.go:65] Loading cluster: ha-834040
	I0311 20:31:25.511937   32857 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:31:25.512034   32857 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:31:25.512226   32857 mustload.go:65] Loading cluster: ha-834040
	I0311 20:31:25.512392   32857 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:31:25.512442   32857 stop.go:39] StopHost: ha-834040-m04
	I0311 20:31:25.512836   32857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:25.512891   32857 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:25.528157   32857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0311 20:31:25.528570   32857 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:25.529174   32857 main.go:141] libmachine: Using API Version  1
	I0311 20:31:25.529197   32857 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:25.529577   32857 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:25.532301   32857 out.go:177] * Stopping node "ha-834040-m04"  ...
	I0311 20:31:25.533860   32857 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0311 20:31:25.533893   32857 main.go:141] libmachine: (ha-834040-m04) Calling .DriverName
	I0311 20:31:25.534110   32857 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0311 20:31:25.534137   32857 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHHostname
	I0311 20:31:25.536652   32857 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:25.537106   32857 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:27:11 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:31:25.537141   32857 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:31:25.537260   32857 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHPort
	I0311 20:31:25.537415   32857 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHKeyPath
	I0311 20:31:25.537583   32857 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHUsername
	I0311 20:31:25.537708   32857 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m04/id_rsa Username:docker}
	I0311 20:31:25.626324   32857 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0311 20:31:25.681157   32857 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0311 20:31:25.737470   32857 main.go:141] libmachine: Stopping "ha-834040-m04"...
	I0311 20:31:25.737512   32857 main.go:141] libmachine: (ha-834040-m04) Calling .GetState
	I0311 20:31:25.738977   32857 main.go:141] libmachine: (ha-834040-m04) Calling .Stop
	I0311 20:31:25.742173   32857 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 0/120
	I0311 20:31:26.743515   32857 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 1/120
	I0311 20:31:27.745300   32857 main.go:141] libmachine: (ha-834040-m04) Calling .GetState
	I0311 20:31:27.746616   32857 main.go:141] libmachine: Machine "ha-834040-m04" was stopped.
	I0311 20:31:27.746636   32857 stop.go:75] duration metric: took 2.212775925s to stop
	I0311 20:31:27.746656   32857 stop.go:39] StopHost: ha-834040-m03
	I0311 20:31:27.746956   32857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:31:27.746990   32857 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:31:27.761697   32857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42905
	I0311 20:31:27.762052   32857 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:31:27.762545   32857 main.go:141] libmachine: Using API Version  1
	I0311 20:31:27.762575   32857 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:31:27.762975   32857 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:31:27.765334   32857 out.go:177] * Stopping node "ha-834040-m03"  ...
	I0311 20:31:27.766845   32857 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0311 20:31:27.766866   32857 main.go:141] libmachine: (ha-834040-m03) Calling .DriverName
	I0311 20:31:27.767071   32857 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0311 20:31:27.767094   32857 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHHostname
	I0311 20:31:27.770079   32857 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:27.770535   32857 main.go:141] libmachine: (ha-834040-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:84:f9", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:25:26 +0000 UTC Type:0 Mac:52:54:00:93:84:f9 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-834040-m03 Clientid:01:52:54:00:93:84:f9}
	I0311 20:31:27.770572   32857 main.go:141] libmachine: (ha-834040-m03) DBG | domain ha-834040-m03 has defined IP address 192.168.39.40 and MAC address 52:54:00:93:84:f9 in network mk-ha-834040
	I0311 20:31:27.770706   32857 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHPort
	I0311 20:31:27.770856   32857 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHKeyPath
	I0311 20:31:27.771011   32857 main.go:141] libmachine: (ha-834040-m03) Calling .GetSSHUsername
	I0311 20:31:27.771095   32857 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m03/id_rsa Username:docker}
	I0311 20:31:27.861689   32857 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0311 20:31:27.915327   32857 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0311 20:31:27.970170   32857 main.go:141] libmachine: Stopping "ha-834040-m03"...
	I0311 20:31:27.970197   32857 main.go:141] libmachine: (ha-834040-m03) Calling .GetState
	I0311 20:31:27.971842   32857 main.go:141] libmachine: (ha-834040-m03) Calling .Stop
	I0311 20:31:27.975777   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 0/120
	I0311 20:31:28.977203   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 1/120
	I0311 20:31:29.978446   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 2/120
	I0311 20:31:30.979998   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 3/120
	I0311 20:31:31.981331   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 4/120
	I0311 20:31:32.983622   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 5/120
	I0311 20:31:33.985314   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 6/120
	I0311 20:31:34.987666   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 7/120
	I0311 20:31:35.989152   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 8/120
	I0311 20:31:36.990890   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 9/120
	I0311 20:31:37.992821   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 10/120
	I0311 20:31:38.994510   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 11/120
	I0311 20:31:39.995982   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 12/120
	I0311 20:31:40.997466   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 13/120
	I0311 20:31:41.998972   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 14/120
	I0311 20:31:43.001141   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 15/120
	I0311 20:31:44.002454   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 16/120
	I0311 20:31:45.003897   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 17/120
	I0311 20:31:46.005930   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 18/120
	I0311 20:31:47.007360   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 19/120
	I0311 20:31:48.008767   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 20/120
	I0311 20:31:49.010554   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 21/120
	I0311 20:31:50.012070   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 22/120
	I0311 20:31:51.013663   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 23/120
	I0311 20:31:52.015090   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 24/120
	I0311 20:31:53.017052   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 25/120
	I0311 20:31:54.018715   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 26/120
	I0311 20:31:55.020078   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 27/120
	I0311 20:31:56.021820   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 28/120
	I0311 20:31:57.023244   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 29/120
	I0311 20:31:58.024982   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 30/120
	I0311 20:31:59.026421   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 31/120
	I0311 20:32:00.028022   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 32/120
	I0311 20:32:01.029538   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 33/120
	I0311 20:32:02.030965   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 34/120
	I0311 20:32:03.032509   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 35/120
	I0311 20:32:04.033922   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 36/120
	I0311 20:32:05.035153   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 37/120
	I0311 20:32:06.036449   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 38/120
	I0311 20:32:07.037601   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 39/120
	I0311 20:32:08.039781   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 40/120
	I0311 20:32:09.041187   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 41/120
	I0311 20:32:10.042406   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 42/120
	I0311 20:32:11.043719   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 43/120
	I0311 20:32:12.044984   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 44/120
	I0311 20:32:13.046500   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 45/120
	I0311 20:32:14.047954   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 46/120
	I0311 20:32:15.049174   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 47/120
	I0311 20:32:16.050521   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 48/120
	I0311 20:32:17.051726   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 49/120
	I0311 20:32:18.053161   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 50/120
	I0311 20:32:19.054684   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 51/120
	I0311 20:32:20.055967   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 52/120
	I0311 20:32:21.057611   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 53/120
	I0311 20:32:22.059061   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 54/120
	I0311 20:32:23.060801   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 55/120
	I0311 20:32:24.062082   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 56/120
	I0311 20:32:25.063461   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 57/120
	I0311 20:32:26.065839   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 58/120
	I0311 20:32:27.067159   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 59/120
	I0311 20:32:28.068455   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 60/120
	I0311 20:32:29.069775   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 61/120
	I0311 20:32:30.070922   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 62/120
	I0311 20:32:31.072480   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 63/120
	I0311 20:32:32.073726   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 64/120
	I0311 20:32:33.075501   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 65/120
	I0311 20:32:34.077057   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 66/120
	I0311 20:32:35.078255   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 67/120
	I0311 20:32:36.079685   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 68/120
	I0311 20:32:37.081065   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 69/120
	I0311 20:32:38.082763   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 70/120
	I0311 20:32:39.084367   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 71/120
	I0311 20:32:40.085635   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 72/120
	I0311 20:32:41.086728   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 73/120
	I0311 20:32:42.087966   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 74/120
	I0311 20:32:43.089486   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 75/120
	I0311 20:32:44.090762   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 76/120
	I0311 20:32:45.091866   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 77/120
	I0311 20:32:46.093307   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 78/120
	I0311 20:32:47.094473   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 79/120
	I0311 20:32:48.096164   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 80/120
	I0311 20:32:49.097600   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 81/120
	I0311 20:32:50.098821   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 82/120
	I0311 20:32:51.100195   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 83/120
	I0311 20:32:52.101639   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 84/120
	I0311 20:32:53.103428   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 85/120
	I0311 20:32:54.104895   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 86/120
	I0311 20:32:55.106251   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 87/120
	I0311 20:32:56.107892   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 88/120
	I0311 20:32:57.109255   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 89/120
	I0311 20:32:58.111257   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 90/120
	I0311 20:32:59.112711   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 91/120
	I0311 20:33:00.114014   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 92/120
	I0311 20:33:01.115187   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 93/120
	I0311 20:33:02.116514   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 94/120
	I0311 20:33:03.118218   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 95/120
	I0311 20:33:04.119434   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 96/120
	I0311 20:33:05.120757   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 97/120
	I0311 20:33:06.121939   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 98/120
	I0311 20:33:07.123364   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 99/120
	I0311 20:33:08.124593   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 100/120
	I0311 20:33:09.126304   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 101/120
	I0311 20:33:10.127507   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 102/120
	I0311 20:33:11.128793   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 103/120
	I0311 20:33:12.130162   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 104/120
	I0311 20:33:13.131835   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 105/120
	I0311 20:33:14.133138   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 106/120
	I0311 20:33:15.134473   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 107/120
	I0311 20:33:16.135881   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 108/120
	I0311 20:33:17.137389   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 109/120
	I0311 20:33:18.139159   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 110/120
	I0311 20:33:19.140494   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 111/120
	I0311 20:33:20.141983   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 112/120
	I0311 20:33:21.143266   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 113/120
	I0311 20:33:22.144606   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 114/120
	I0311 20:33:23.146491   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 115/120
	I0311 20:33:24.147993   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 116/120
	I0311 20:33:25.150031   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 117/120
	I0311 20:33:26.151364   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 118/120
	I0311 20:33:27.152602   32857 main.go:141] libmachine: (ha-834040-m03) Waiting for machine to stop 119/120
	I0311 20:33:28.153948   32857 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0311 20:33:28.154004   32857 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0311 20:33:28.156058   32857 out.go:177] 
	W0311 20:33:28.157391   32857 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0311 20:33:28.157405   32857 out.go:239] * 
	* 
	W0311 20:33:28.160595   32857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 20:33:28.162372   32857 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 stop -p ha-834040 -v=7 --alsologtostderr" : exit status 82
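The stop timed out because ha-834040-m03 never left the "Running" state within the 120 one-second polls shown in the stderr above, which surfaces as GUEST_STOP_TIMEOUT and exit status 82; note that before issuing each stop, minikube first backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup over SSH, as the rsync commands show. A simplified Go sketch of such a bounded wait loop, assuming a pluggable state source (this is not the libmachine implementation):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls a VM state function once per second, mirroring the
// "Waiting for machine to stop N/120" lines above, and gives up after the
// given number of attempts. Hypothetical sketch, not minikube code.
func waitForStop(getState func() (string, error), attempts int) error {
	for i := 0; i < attempts; i++ {
		if state, err := getState(); err == nil && state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Demo with a state source that never reports "Stopped" (small attempt
	// count for illustration), reproducing the timeout path.
	err := waitForStop(func() (string, error) { return "Running", nil }, 3)
	fmt.Println("result:", err)
}
```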
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-834040 --wait=true -v=7 --alsologtostderr
E0311 20:36:58.808059   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:37:38.935525   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-834040 --wait=true -v=7 --alsologtostderr: (4m20.778546576s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-834040
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-834040 -n ha-834040
helpers_test.go:244: <<< TestMutliControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-834040 logs -n 25: (2.065837941s)
helpers_test.go:252: TestMutliControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m02:/home/docker/cp-test_ha-834040-m03_ha-834040-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m02 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m03_ha-834040-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04:/home/docker/cp-test_ha-834040-m03_ha-834040-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m04 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m03_ha-834040-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-834040 cp testdata/cp-test.txt                                                | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile2017558617/001/cp-test_ha-834040-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040:/home/docker/cp-test_ha-834040-m04_ha-834040.txt                       |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040 sudo cat                                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m04_ha-834040.txt                                 |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m02:/home/docker/cp-test_ha-834040-m04_ha-834040-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m02 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m04_ha-834040-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03:/home/docker/cp-test_ha-834040-m04_ha-834040-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m03 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m04_ha-834040-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-834040 node stop m02 -v=7                                                     | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-834040 node start m02 -v=7                                                    | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-834040 -v=7                                                           | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-834040 -v=7                                                                | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-834040 --wait=true -v=7                                                    | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:33 UTC | 11 Mar 24 20:37 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-834040                                                                | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:37 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 20:33:28
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 20:33:28.226126   33198 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:33:28.226349   33198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:33:28.226357   33198 out.go:304] Setting ErrFile to fd 2...
	I0311 20:33:28.226361   33198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:33:28.226553   33198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:33:28.227065   33198 out.go:298] Setting JSON to false
	I0311 20:33:28.227905   33198 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4557,"bootTime":1710184651,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 20:33:28.227964   33198 start.go:139] virtualization: kvm guest
	I0311 20:33:28.230585   33198 out.go:177] * [ha-834040] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 20:33:28.232124   33198 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 20:33:28.232163   33198 notify.go:220] Checking for updates...
	I0311 20:33:28.233861   33198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 20:33:28.235553   33198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:33:28.237206   33198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:33:28.238616   33198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 20:33:28.240071   33198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 20:33:28.241787   33198 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:33:28.241877   33198 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 20:33:28.242309   33198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:33:28.242345   33198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:33:28.257426   33198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37593
	I0311 20:33:28.257815   33198 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:33:28.258314   33198 main.go:141] libmachine: Using API Version  1
	I0311 20:33:28.258337   33198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:33:28.258697   33198 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:33:28.258846   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:33:28.292885   33198 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 20:33:28.294287   33198 start.go:297] selected driver: kvm2
	I0311 20:33:28.294303   33198 start.go:901] validating driver "kvm2" against &{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:33:28.294423   33198 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 20:33:28.294717   33198 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:33:28.294775   33198 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 20:33:28.308830   33198 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 20:33:28.309472   33198 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 20:33:28.309501   33198 cni.go:84] Creating CNI manager for ""
	I0311 20:33:28.309507   33198 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0311 20:33:28.309551   33198 start.go:340] cluster config:
	{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:33:28.309674   33198 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:33:28.311515   33198 out.go:177] * Starting "ha-834040" primary control-plane node in "ha-834040" cluster
	I0311 20:33:28.312806   33198 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:33:28.312831   33198 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0311 20:33:28.312837   33198 cache.go:56] Caching tarball of preloaded images
	I0311 20:33:28.312908   33198 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 20:33:28.312921   33198 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 20:33:28.313041   33198 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:33:28.313220   33198 start.go:360] acquireMachinesLock for ha-834040: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 20:33:28.313255   33198 start.go:364] duration metric: took 19.892µs to acquireMachinesLock for "ha-834040"
	I0311 20:33:28.313268   33198 start.go:96] Skipping create...Using existing machine configuration
	I0311 20:33:28.313276   33198 fix.go:54] fixHost starting: 
	I0311 20:33:28.313506   33198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:33:28.313532   33198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:33:28.326605   33198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36093
	I0311 20:33:28.327013   33198 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:33:28.327512   33198 main.go:141] libmachine: Using API Version  1
	I0311 20:33:28.327531   33198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:33:28.327802   33198 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:33:28.327982   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:33:28.328145   33198 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:33:28.329537   33198 fix.go:112] recreateIfNeeded on ha-834040: state=Running err=<nil>
	W0311 20:33:28.329571   33198 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 20:33:28.331467   33198 out.go:177] * Updating the running kvm2 "ha-834040" VM ...
	I0311 20:33:28.332916   33198 machine.go:94] provisionDockerMachine start ...
	I0311 20:33:28.332942   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:33:28.333104   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:33:28.335506   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.335915   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:28.335938   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.336054   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:33:28.336194   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.336357   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.336486   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:33:28.336637   33198 main.go:141] libmachine: Using SSH client type: native
	I0311 20:33:28.336893   33198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:33:28.336908   33198 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 20:33:28.450423   33198 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-834040
	
	I0311 20:33:28.450452   33198 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:33:28.450663   33198 buildroot.go:166] provisioning hostname "ha-834040"
	I0311 20:33:28.450678   33198 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:33:28.450859   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:33:28.453321   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.453738   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:28.453764   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.453922   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:33:28.454102   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.454274   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.454397   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:33:28.454533   33198 main.go:141] libmachine: Using SSH client type: native
	I0311 20:33:28.454722   33198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:33:28.454736   33198 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-834040 && echo "ha-834040" | sudo tee /etc/hostname
	I0311 20:33:28.585815   33198 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-834040
	
	I0311 20:33:28.585860   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:33:28.588686   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.589092   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:28.589116   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.589377   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:33:28.589566   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.589773   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.589910   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:33:28.590048   33198 main.go:141] libmachine: Using SSH client type: native
	I0311 20:33:28.590218   33198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:33:28.590239   33198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-834040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-834040/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-834040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 20:33:28.697743   33198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
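
The hostname step above is idempotent: it only rewrites the 127.0.1.1 entry when the node name is missing from /etc/hosts. As a hedged, standalone sketch (not a command minikube itself runs), the result can be spot-checked by hand on the node:

    # illustrative verification only; the node name comes from the log above
    hostname                                      # should print ha-834040
    grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts # entry added/rewritten by the script above, if one was needed
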
	I0311 20:33:28.697775   33198 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 20:33:28.697800   33198 buildroot.go:174] setting up certificates
	I0311 20:33:28.697809   33198 provision.go:84] configureAuth start
	I0311 20:33:28.697817   33198 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:33:28.698163   33198 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:33:28.700459   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.700906   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:28.700933   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.701070   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:33:28.702846   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.703209   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:28.703229   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.703398   33198 provision.go:143] copyHostCerts
	I0311 20:33:28.703427   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:33:28.703469   33198 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 20:33:28.703481   33198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:33:28.703557   33198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 20:33:28.703660   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:33:28.703683   33198 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 20:33:28.703690   33198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:33:28.703730   33198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 20:33:28.703838   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:33:28.703864   33198 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 20:33:28.703870   33198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:33:28.703906   33198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 20:33:28.703970   33198 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.ha-834040 san=[127.0.0.1 192.168.39.128 ha-834040 localhost minikube]
	I0311 20:33:28.852220   33198 provision.go:177] copyRemoteCerts
	I0311 20:33:28.852285   33198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 20:33:28.852312   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:33:28.854832   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.855243   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:28.855273   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.855478   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:33:28.855665   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.855834   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:33:28.855983   33198 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:33:28.936281   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0311 20:33:28.936359   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0311 20:33:28.966996   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0311 20:33:28.967052   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 20:33:28.996023   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0311 20:33:28.996085   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 20:33:29.024302   33198 provision.go:87] duration metric: took 326.482478ms to configureAuth
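
configureAuth regenerates the server certificate with the SANs listed above (127.0.0.1, 192.168.39.128, ha-834040, localhost, minikube). A minimal sketch for confirming those SANs on the Jenkins host where the file lives, assuming only the standard openssl CLI; the variable name is illustrative:

    SERVER_PEM=/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem
    # print the SAN extension of the freshly generated server cert
    openssl x509 -in "$SERVER_PEM" -noout -text | grep -A1 'Subject Alternative Name'
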
	I0311 20:33:29.024326   33198 buildroot.go:189] setting minikube options for container-runtime
	I0311 20:33:29.024523   33198 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:33:29.024615   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:33:29.027075   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:29.027425   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:29.027450   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:29.027591   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:33:29.027763   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:29.027910   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:29.028040   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:33:29.028212   33198 main.go:141] libmachine: Using SSH client type: native
	I0311 20:33:29.028368   33198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:33:29.028384   33198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 20:34:59.909468   33198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 20:34:59.909490   33198 machine.go:97] duration metric: took 1m31.576554147s to provisionDockerMachine
	I0311 20:34:59.909501   33198 start.go:293] postStartSetup for "ha-834040" (driver="kvm2")
	I0311 20:34:59.909511   33198 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 20:34:59.909524   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:34:59.909801   33198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 20:34:59.909860   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:34:59.912858   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:34:59.913279   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:34:59.913304   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:34:59.913443   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:34:59.913639   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:34:59.913827   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:34:59.913965   33198 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:34:59.996839   33198 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 20:35:00.002329   33198 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 20:35:00.002351   33198 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 20:35:00.002404   33198 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 20:35:00.002469   33198 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 20:35:00.002479   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /etc/ssl/certs/182352.pem
	I0311 20:35:00.002554   33198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 20:35:00.016406   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:35:00.045260   33198 start.go:296] duration metric: took 135.744546ms for postStartSetup
	I0311 20:35:00.045304   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:35:00.045611   33198 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0311 20:35:00.045640   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:35:00.047965   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.048370   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:35:00.048396   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.048541   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:35:00.048723   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:35:00.048893   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:35:00.049048   33198 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	W0311 20:35:00.131991   33198 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0311 20:35:00.132023   33198 fix.go:56] duration metric: took 1m31.818746443s for fixHost
	I0311 20:35:00.132055   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:35:00.134403   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.134823   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:35:00.134853   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.135032   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:35:00.135231   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:35:00.135406   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:35:00.135549   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:35:00.135705   33198 main.go:141] libmachine: Using SSH client type: native
	I0311 20:35:00.135869   33198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:35:00.135880   33198 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 20:35:00.242337   33198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710189300.206863241
	
	I0311 20:35:00.242356   33198 fix.go:216] guest clock: 1710189300.206863241
	I0311 20:35:00.242363   33198 fix.go:229] Guest: 2024-03-11 20:35:00.206863241 +0000 UTC Remote: 2024-03-11 20:35:00.132031274 +0000 UTC m=+91.958740141 (delta=74.831967ms)
	I0311 20:35:00.242391   33198 fix.go:200] guest clock delta is within tolerance: 74.831967ms
	I0311 20:35:00.242397   33198 start.go:83] releasing machines lock for "ha-834040", held for 1m31.929132911s
	I0311 20:35:00.242415   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:35:00.242677   33198 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:35:00.245079   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.245482   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:35:00.245527   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.245641   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:35:00.246235   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:35:00.246410   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:35:00.246497   33198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 20:35:00.246542   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:35:00.246639   33198 ssh_runner.go:195] Run: cat /version.json
	I0311 20:35:00.246665   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:35:00.249177   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.249467   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.249561   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:35:00.249603   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.249706   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:35:00.249849   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:35:00.249858   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:35:00.249877   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.250031   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:35:00.250032   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:35:00.250220   33198 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:35:00.250246   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:35:00.250393   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:35:00.250527   33198 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:35:00.326624   33198 ssh_runner.go:195] Run: systemctl --version
	I0311 20:35:00.352392   33198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 20:35:00.522734   33198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 20:35:00.530063   33198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 20:35:00.530138   33198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 20:35:00.541331   33198 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0311 20:35:00.541349   33198 start.go:494] detecting cgroup driver to use...
	I0311 20:35:00.541417   33198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 20:35:00.559256   33198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 20:35:00.574277   33198 docker.go:217] disabling cri-docker service (if available) ...
	I0311 20:35:00.574328   33198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 20:35:00.590177   33198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 20:35:00.605002   33198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 20:35:00.767373   33198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 20:35:00.927704   33198 docker.go:233] disabling docker service ...
	I0311 20:35:00.927758   33198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 20:35:00.947407   33198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 20:35:00.962590   33198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 20:35:01.115537   33198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 20:35:01.269146   33198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 20:35:01.284696   33198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 20:35:01.305768   33198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 20:35:01.305838   33198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:35:01.319388   33198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 20:35:01.319441   33198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:35:01.332232   33198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:35:01.344043   33198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:35:01.356143   33198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 20:35:01.368840   33198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 20:35:01.379804   33198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 20:35:01.390913   33198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:35:01.549351   33198 ssh_runner.go:195] Run: sudo systemctl restart crio
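
The sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and force conmon into the pod cgroup before crio is restarted. A hedged way to confirm the drop-in on the node (the file paths are taken from the commands above; the expected values are shown as comments, not as guaranteed output):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the sed commands above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    cat /etc/crictl.yaml          # should point crictl at unix:///var/run/crio/crio.sock
    sudo systemctl is-active crio # "active" once the restart above has completed
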
	I0311 20:35:01.923429   33198 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 20:35:01.923488   33198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 20:35:01.928835   33198 start.go:562] Will wait 60s for crictl version
	I0311 20:35:01.928891   33198 ssh_runner.go:195] Run: which crictl
	I0311 20:35:01.933350   33198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 20:35:01.987937   33198 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 20:35:01.988026   33198 ssh_runner.go:195] Run: crio --version
	I0311 20:35:02.019165   33198 ssh_runner.go:195] Run: crio --version
	I0311 20:35:02.052945   33198 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 20:35:02.054221   33198 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:35:02.056782   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:02.057164   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:35:02.057186   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:02.057386   33198 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 20:35:02.062545   33198 kubeadm.go:877] updating cluster {Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 20:35:02.062686   33198 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:35:02.062732   33198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:35:02.115425   33198 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 20:35:02.115446   33198 crio.go:415] Images already preloaded, skipping extraction
	I0311 20:35:02.115486   33198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:35:02.157541   33198 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 20:35:02.157563   33198 cache_images.go:84] Images are preloaded, skipping loading
	I0311 20:35:02.157573   33198 kubeadm.go:928] updating node { 192.168.39.128 8443 v1.28.4 crio true true} ...
	I0311 20:35:02.157696   33198 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-834040 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 20:35:02.157778   33198 ssh_runner.go:195] Run: crio config
	I0311 20:35:02.207212   33198 cni.go:84] Creating CNI manager for ""
	I0311 20:35:02.207243   33198 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0311 20:35:02.207256   33198 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 20:35:02.207275   33198 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-834040 NodeName:ha-834040 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 20:35:02.207435   33198 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-834040"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 20:35:02.207461   33198 kube-vip.go:101] generating kube-vip config ...
	I0311 20:35:02.207506   33198 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
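
The kube-vip static pod above advertises 192.168.39.254 on eth0 and fronts the API server on port 8443, with leader election on the plndr-cp-lock lease. A hedged sketch for checking, from a machine that can reach the cluster and from whichever control-plane node currently holds the lease, that the VIP is bound and serving:

    ip -4 addr show dev eth0 | grep 192.168.39.254   # VIP should be bound on the current leader
    curl -sk https://192.168.39.254:8443/healthz     # expect: ok
    kubectl -n kube-system get lease plndr-cp-lock   # shows which node holds the kube-vip lease
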
	I0311 20:35:02.207549   33198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 20:35:02.218800   33198 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 20:35:02.218867   33198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0311 20:35:02.229881   33198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0311 20:35:02.248776   33198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 20:35:02.267701   33198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0311 20:35:02.285978   33198 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0311 20:35:02.304380   33198 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0311 20:35:02.308680   33198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:35:02.455235   33198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:35:02.471506   33198 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040 for IP: 192.168.39.128
	I0311 20:35:02.471529   33198 certs.go:194] generating shared ca certs ...
	I0311 20:35:02.471548   33198 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:35:02.471727   33198 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 20:35:02.471793   33198 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 20:35:02.471807   33198 certs.go:256] generating profile certs ...
	I0311 20:35:02.471896   33198 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key
	I0311 20:35:02.471930   33198 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.8b7c4a26
	I0311 20:35:02.471947   33198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.8b7c4a26 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.101 192.168.39.40 192.168.39.254]
	I0311 20:35:02.632897   33198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.8b7c4a26 ...
	I0311 20:35:02.632923   33198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.8b7c4a26: {Name:mk8ed2f3c0d8195405e2faef9275c0bb79ff2ac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:35:02.633080   33198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.8b7c4a26 ...
	I0311 20:35:02.633091   33198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.8b7c4a26: {Name:mkfe4e256c37c321648816748aaee4cf776ec925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:35:02.633160   33198 certs.go:381] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.8b7c4a26 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt
	I0311 20:35:02.633304   33198 certs.go:385] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.8b7c4a26 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key
	I0311 20:35:02.633427   33198 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key
	I0311 20:35:02.633442   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0311 20:35:02.633453   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0311 20:35:02.633464   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0311 20:35:02.633474   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0311 20:35:02.633483   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0311 20:35:02.633492   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0311 20:35:02.633502   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0311 20:35:02.633512   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0311 20:35:02.633557   33198 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 20:35:02.633583   33198 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 20:35:02.633592   33198 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 20:35:02.633615   33198 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 20:35:02.633664   33198 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 20:35:02.633689   33198 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 20:35:02.633724   33198 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:35:02.633748   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem -> /usr/share/ca-certificates/18235.pem
	I0311 20:35:02.633761   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /usr/share/ca-certificates/182352.pem
	I0311 20:35:02.633774   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:35:02.634276   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 20:35:02.662882   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 20:35:02.689474   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 20:35:02.715652   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 20:35:02.743547   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0311 20:35:02.769956   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 20:35:02.797999   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 20:35:02.825764   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 20:35:02.852129   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 20:35:02.877568   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 20:35:02.903812   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 20:35:02.929369   33198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 20:35:02.948373   33198 ssh_runner.go:195] Run: openssl version
	I0311 20:35:02.954891   33198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 20:35:02.967173   33198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:35:02.972176   33198 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:35:02.972229   33198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:35:02.978615   33198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 20:35:02.994538   33198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 20:35:03.006744   33198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 20:35:03.011794   33198 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 20:35:03.011847   33198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 20:35:03.018142   33198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 20:35:03.029227   33198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 20:35:03.041607   33198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 20:35:03.047089   33198 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 20:35:03.047138   33198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 20:35:03.053523   33198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 20:35:03.065109   33198 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 20:35:03.070392   33198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 20:35:03.076658   33198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 20:35:03.082922   33198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 20:35:03.089032   33198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 20:35:03.095166   33198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 20:35:03.101368   33198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
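
Each of the openssl runs above fails if the certificate in question expires within the next 24 hours (86400 seconds). The same check, collapsed into one loop as an illustrative sketch over the certificate paths probed above:

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        && echo "$c: valid for at least 24h" \
        || echo "$c: expires within 24h"
    done
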
	I0311 20:35:03.108167   33198 kubeadm.go:391] StartCluster: {Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:35:03.108268   33198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 20:35:03.108308   33198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 20:35:03.156039   33198 cri.go:89] found id: "6bdfe67eb7a848f6a0d969a29c20ba9575264d3254289f3e69d76d2e256f0a23"
	I0311 20:35:03.156063   33198 cri.go:89] found id: "c5745481a2bd303d21ee4b5d13b5667eba96af6aba1c646e8cac99a1390a8572"
	I0311 20:35:03.156068   33198 cri.go:89] found id: "b1a7df27a0f7c49fa96b4dfc438c4814e0c224f8f2f6bba553866403916ca5c1"
	I0311 20:35:03.156074   33198 cri.go:89] found id: "b96396c0e35ce209cca3d72aa43430faa3908fc9287ff74cc60440fdf88f040f"
	I0311 20:35:03.156078   33198 cri.go:89] found id: "afc1d1d2e164dd343671afbbbe3ffc3de1a7f9e87e3fb6c2094eed1725c62105"
	I0311 20:35:03.156084   33198 cri.go:89] found id: "48ff55cc7dd7ce86b2ec6d65b88532b25bd348edd26139398dbf126195687f15"
	I0311 20:35:03.156088   33198 cri.go:89] found id: "7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450"
	I0311 20:35:03.156092   33198 cri.go:89] found id: "6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc"
	I0311 20:35:03.156096   33198 cri.go:89] found id: "ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25"
	I0311 20:35:03.156103   33198 cri.go:89] found id: "4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1"
	I0311 20:35:03.156107   33198 cri.go:89] found id: "abfa6c7eaf9de4ab3088d26a5835e9b00f125cd279c3b56757edcb48e368cbf8"
	I0311 20:35:03.156111   33198 cri.go:89] found id: "4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861"
	I0311 20:35:03.156114   33198 cri.go:89] found id: "d2c6fc6f4ca02e29aec794ea48b682294a80ffbea548013775fff8dfd449a944"
	I0311 20:35:03.156118   33198 cri.go:89] found id: ""
	I0311 20:35:03.156170   33198 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.733029944Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189469733000733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=794b8bb5-94ee-4fac-bc82-ddfd62bad439 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.733523610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50a06a24-0802-4953-b016-57fb8056efca name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.733656561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50a06a24-0802-4953-b016-57fb8056efca name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.734203194Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6c7f58f0ecba2abb4331fff9dd84f1caaada79b61f3e7d55d8f0d7306667734,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710189437599719439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4fa8160f6f5215b914701525d711241bb4d574dd1f1c698301b206fc545ab5,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710189382610029136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710189350602547776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d12665eb117c2cc75d85256cf4dd018d8ed2992d5f7c141134a85b41b2a4294,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710189349597577731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a17645171f66b7a1858a9482aeee87d6041bfd933d305b1548e3ebfa58800,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710189347596792761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3832accc496d3e6679bd39117f2f8e7c441c6a002c9e64c0ec10c3e20a2e2a2a,PodSandboxId:c2780ed8082241d2d00f6529cc7d2c01776909d9f84c2c0e4731e4006bc0669b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710189338938216376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9876035a67109aab2d7ccb01e043938c07a68707f0b5aac080bdc3f86a9a263,PodSandboxId:9db00ddf870f0dc290aff114bb00eb43547e46a8d8b29ae944a1117328fce69e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710189306578156100,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:b60c1c2efa76c17a9d1751e8bb3b16ca171899c4bf68a80acf6925f84e1a7c55,PodSandboxId:a0d58ca9155034374fd9f12edbc5e58f99162c267563c3bb25ea5a7c7e7a2772,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710189306053992091,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8d445
c7477e86f69595642d02430b9dbe61c4ecbff89353b7edca7c7bd72da,PodSandboxId:feacd92c56223e2e8bf7543d1d93913b6ca8e364e24f66932eec768f2c500882,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189306008448313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:295775061cd270ab219ce780ebeb623bf6f1dedfcd5e5693598e3cb2b65c506d,PodSandboxId:5f09ca01a653a1f54a6736c0ec543c45b9c4b0b69395e09fbde14c7976d5970b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710189305860476552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaefaf7c41e62b6bf2975f73ab22408cd0498630eeb0042872545e429387e0db,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710189305790540189,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107118ba00c2d09428d6fb98ab4898f7fdeab599261beefaf53f6d20b8a12802,PodSandboxId:48d5e3492f37bdc2894837aa00c8d95665b2b817628e3ebc846b9e22d9a772bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710189305777401386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b7ee8bc2fdc38b38cf39f7d4cb9080e58593b4e35407bf28ba440d3a7aae44,PodSandboxId:31077b778010bae070fdaba2a7e62491855b23640273a208df747f420acc6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189305663521866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a69a51bad87e670335840f5e4e47f671ebfb4ee83d1a1be58ee2fe4d9111f1,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710189305534759912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map
[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f072720516b73eb54d2f1b36bfaf802e1d1f8c14b6fab73ed78f4e12e4dfc3d,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710189305490975196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kube
rnetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96396c0e35ce209cca3d72aa43430faa3908fc9287ff74cc60440fdf88f040f,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710189114601156633,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kuber
netes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710188810909713860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626343540719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626308848252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710188621618774474,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Crea
tedAt:1710188600842160862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710188600790791703,Labels:map[strin
g]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50a06a24-0802-4953-b016-57fb8056efca name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.785486613Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dcc05e62-b96e-4726-800c-86c8c2cc2472 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.785622255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dcc05e62-b96e-4726-800c-86c8c2cc2472 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.787483848Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=892bc47f-974c-4598-9a4e-7ea138a1b366 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.787949174Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189469787927539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=892bc47f-974c-4598-9a4e-7ea138a1b366 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.788842490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2ddfb75-76db-48c9-992b-67cedb786c4c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.788928620Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2ddfb75-76db-48c9-992b-67cedb786c4c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.789523091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6c7f58f0ecba2abb4331fff9dd84f1caaada79b61f3e7d55d8f0d7306667734,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710189437599719439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4fa8160f6f5215b914701525d711241bb4d574dd1f1c698301b206fc545ab5,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710189382610029136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710189350602547776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d12665eb117c2cc75d85256cf4dd018d8ed2992d5f7c141134a85b41b2a4294,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710189349597577731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a17645171f66b7a1858a9482aeee87d6041bfd933d305b1548e3ebfa58800,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710189347596792761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3832accc496d3e6679bd39117f2f8e7c441c6a002c9e64c0ec10c3e20a2e2a2a,PodSandboxId:c2780ed8082241d2d00f6529cc7d2c01776909d9f84c2c0e4731e4006bc0669b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710189338938216376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9876035a67109aab2d7ccb01e043938c07a68707f0b5aac080bdc3f86a9a263,PodSandboxId:9db00ddf870f0dc290aff114bb00eb43547e46a8d8b29ae944a1117328fce69e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710189306578156100,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:b60c1c2efa76c17a9d1751e8bb3b16ca171899c4bf68a80acf6925f84e1a7c55,PodSandboxId:a0d58ca9155034374fd9f12edbc5e58f99162c267563c3bb25ea5a7c7e7a2772,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710189306053992091,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8d445
c7477e86f69595642d02430b9dbe61c4ecbff89353b7edca7c7bd72da,PodSandboxId:feacd92c56223e2e8bf7543d1d93913b6ca8e364e24f66932eec768f2c500882,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189306008448313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:295775061cd270ab219ce780ebeb623bf6f1dedfcd5e5693598e3cb2b65c506d,PodSandboxId:5f09ca01a653a1f54a6736c0ec543c45b9c4b0b69395e09fbde14c7976d5970b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710189305860476552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaefaf7c41e62b6bf2975f73ab22408cd0498630eeb0042872545e429387e0db,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710189305790540189,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107118ba00c2d09428d6fb98ab4898f7fdeab599261beefaf53f6d20b8a12802,PodSandboxId:48d5e3492f37bdc2894837aa00c8d95665b2b817628e3ebc846b9e22d9a772bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710189305777401386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b7ee8bc2fdc38b38cf39f7d4cb9080e58593b4e35407bf28ba440d3a7aae44,PodSandboxId:31077b778010bae070fdaba2a7e62491855b23640273a208df747f420acc6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189305663521866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a69a51bad87e670335840f5e4e47f671ebfb4ee83d1a1be58ee2fe4d9111f1,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710189305534759912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map
[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f072720516b73eb54d2f1b36bfaf802e1d1f8c14b6fab73ed78f4e12e4dfc3d,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710189305490975196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kube
rnetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96396c0e35ce209cca3d72aa43430faa3908fc9287ff74cc60440fdf88f040f,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710189114601156633,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kuber
netes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710188810909713860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626343540719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626308848252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710188621618774474,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Crea
tedAt:1710188600842160862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710188600790791703,Labels:map[strin
g]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2ddfb75-76db-48c9-992b-67cedb786c4c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.843617188Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a08fb446-c4f4-453e-bd4a-6d547c1b0c46 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.843753696Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a08fb446-c4f4-453e-bd4a-6d547c1b0c46 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.851827542Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c694df8-5c85-4b92-aa3f-800585e318a4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.853422735Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189469853327382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c694df8-5c85-4b92-aa3f-800585e318a4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.854483094Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3a29494-7075-40d4-84cb-367111a9f7b6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.854567535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3a29494-7075-40d4-84cb-367111a9f7b6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.854972242Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6c7f58f0ecba2abb4331fff9dd84f1caaada79b61f3e7d55d8f0d7306667734,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710189437599719439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4fa8160f6f5215b914701525d711241bb4d574dd1f1c698301b206fc545ab5,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710189382610029136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710189350602547776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d12665eb117c2cc75d85256cf4dd018d8ed2992d5f7c141134a85b41b2a4294,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710189349597577731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a17645171f66b7a1858a9482aeee87d6041bfd933d305b1548e3ebfa58800,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710189347596792761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3832accc496d3e6679bd39117f2f8e7c441c6a002c9e64c0ec10c3e20a2e2a2a,PodSandboxId:c2780ed8082241d2d00f6529cc7d2c01776909d9f84c2c0e4731e4006bc0669b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710189338938216376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9876035a67109aab2d7ccb01e043938c07a68707f0b5aac080bdc3f86a9a263,PodSandboxId:9db00ddf870f0dc290aff114bb00eb43547e46a8d8b29ae944a1117328fce69e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710189306578156100,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:b60c1c2efa76c17a9d1751e8bb3b16ca171899c4bf68a80acf6925f84e1a7c55,PodSandboxId:a0d58ca9155034374fd9f12edbc5e58f99162c267563c3bb25ea5a7c7e7a2772,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710189306053992091,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8d445
c7477e86f69595642d02430b9dbe61c4ecbff89353b7edca7c7bd72da,PodSandboxId:feacd92c56223e2e8bf7543d1d93913b6ca8e364e24f66932eec768f2c500882,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189306008448313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:295775061cd270ab219ce780ebeb623bf6f1dedfcd5e5693598e3cb2b65c506d,PodSandboxId:5f09ca01a653a1f54a6736c0ec543c45b9c4b0b69395e09fbde14c7976d5970b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710189305860476552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaefaf7c41e62b6bf2975f73ab22408cd0498630eeb0042872545e429387e0db,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710189305790540189,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107118ba00c2d09428d6fb98ab4898f7fdeab599261beefaf53f6d20b8a12802,PodSandboxId:48d5e3492f37bdc2894837aa00c8d95665b2b817628e3ebc846b9e22d9a772bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710189305777401386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b7ee8bc2fdc38b38cf39f7d4cb9080e58593b4e35407bf28ba440d3a7aae44,PodSandboxId:31077b778010bae070fdaba2a7e62491855b23640273a208df747f420acc6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189305663521866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a69a51bad87e670335840f5e4e47f671ebfb4ee83d1a1be58ee2fe4d9111f1,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710189305534759912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map
[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f072720516b73eb54d2f1b36bfaf802e1d1f8c14b6fab73ed78f4e12e4dfc3d,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710189305490975196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kube
rnetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96396c0e35ce209cca3d72aa43430faa3908fc9287ff74cc60440fdf88f040f,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710189114601156633,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kuber
netes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710188810909713860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626343540719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626308848252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710188621618774474,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Crea
tedAt:1710188600842160862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710188600790791703,Labels:map[strin
g]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3a29494-7075-40d4-84cb-367111a9f7b6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.909505490Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e7d9cee-ce6c-4e7a-bc6c-83daae1c9605 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.909633776Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e7d9cee-ce6c-4e7a-bc6c-83daae1c9605 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.911804817Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a169ce19-1608-45f5-9eca-492646a1d581 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.912392213Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189469912367923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a169ce19-1608-45f5-9eca-492646a1d581 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.912936437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fadcfdc5-1f95-4a78-beed-f40ff1251f5f name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.913031656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fadcfdc5-1f95-4a78-beed-f40ff1251f5f name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:37:49 ha-834040 crio[3934]: time="2024-03-11 20:37:49.913660829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6c7f58f0ecba2abb4331fff9dd84f1caaada79b61f3e7d55d8f0d7306667734,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710189437599719439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4fa8160f6f5215b914701525d711241bb4d574dd1f1c698301b206fc545ab5,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710189382610029136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710189350602547776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d12665eb117c2cc75d85256cf4dd018d8ed2992d5f7c141134a85b41b2a4294,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710189349597577731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a17645171f66b7a1858a9482aeee87d6041bfd933d305b1548e3ebfa58800,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710189347596792761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3832accc496d3e6679bd39117f2f8e7c441c6a002c9e64c0ec10c3e20a2e2a2a,PodSandboxId:c2780ed8082241d2d00f6529cc7d2c01776909d9f84c2c0e4731e4006bc0669b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710189338938216376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9876035a67109aab2d7ccb01e043938c07a68707f0b5aac080bdc3f86a9a263,PodSandboxId:9db00ddf870f0dc290aff114bb00eb43547e46a8d8b29ae944a1117328fce69e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710189306578156100,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:b60c1c2efa76c17a9d1751e8bb3b16ca171899c4bf68a80acf6925f84e1a7c55,PodSandboxId:a0d58ca9155034374fd9f12edbc5e58f99162c267563c3bb25ea5a7c7e7a2772,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710189306053992091,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8d445
c7477e86f69595642d02430b9dbe61c4ecbff89353b7edca7c7bd72da,PodSandboxId:feacd92c56223e2e8bf7543d1d93913b6ca8e364e24f66932eec768f2c500882,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189306008448313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:295775061cd270ab219ce780ebeb623bf6f1dedfcd5e5693598e3cb2b65c506d,PodSandboxId:5f09ca01a653a1f54a6736c0ec543c45b9c4b0b69395e09fbde14c7976d5970b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710189305860476552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaefaf7c41e62b6bf2975f73ab22408cd0498630eeb0042872545e429387e0db,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710189305790540189,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107118ba00c2d09428d6fb98ab4898f7fdeab599261beefaf53f6d20b8a12802,PodSandboxId:48d5e3492f37bdc2894837aa00c8d95665b2b817628e3ebc846b9e22d9a772bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710189305777401386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b7ee8bc2fdc38b38cf39f7d4cb9080e58593b4e35407bf28ba440d3a7aae44,PodSandboxId:31077b778010bae070fdaba2a7e62491855b23640273a208df747f420acc6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189305663521866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a69a51bad87e670335840f5e4e47f671ebfb4ee83d1a1be58ee2fe4d9111f1,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710189305534759912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map
[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f072720516b73eb54d2f1b36bfaf802e1d1f8c14b6fab73ed78f4e12e4dfc3d,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710189305490975196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kube
rnetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96396c0e35ce209cca3d72aa43430faa3908fc9287ff74cc60440fdf88f040f,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710189114601156633,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kuber
netes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710188810909713860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626343540719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626308848252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710188621618774474,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Crea
tedAt:1710188600842160862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710188600790791703,Labels:map[strin
g]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fadcfdc5-1f95-4a78-beed-f40ff1251f5f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d6c7f58f0ecba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      32 seconds ago       Running             storage-provisioner       5                   6ef704c8e70a9       storage-provisioner
	5a4fa8160f6f5       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   bfa23d82d4c2e       kindnet-bw656
	a20030032ebd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       4                   6ef704c8e70a9       storage-provisioner
	4d12665eb117c       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Running             kube-apiserver            3                   85b6fb2e7a9fe       kube-apiserver-ha-834040
	a61a17645171f       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago        Running             kube-controller-manager   2                   4fc559c46ae67       kube-controller-manager-ha-834040
	3832accc496d3       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   c2780ed808224       busybox-5b5d89c9d6-d62cw
	f9876035a6710       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      2 minutes ago        Running             kube-proxy                1                   9db00ddf870f0       kube-proxy-h8svv
	b60c1c2efa76c       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  3                   a0d58ca915503       kube-vip-ha-834040
	da8d445c7477e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   feacd92c56223       coredns-5dd5756b68-d6f2x
	295775061cd27       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      2 minutes ago        Running             kube-scheduler            1                   5f09ca01a653a       kube-scheduler-ha-834040
	eaefaf7c41e62       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   bfa23d82d4c2e       kindnet-bw656
	107118ba00c2d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      2 minutes ago        Running             etcd                      1                   48d5e3492f37b       etcd-ha-834040
	a6b7ee8bc2fdc       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   31077b778010b       coredns-5dd5756b68-kq47h
	f1a69a51bad87       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago        Exited              kube-controller-manager   1                   4fc559c46ae67       kube-controller-manager-ha-834040
	9f072720516b7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Exited              kube-apiserver            2                   85b6fb2e7a9fe       kube-apiserver-ha-834040
	b96396c0e35ce       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago        Exited              kube-vip                  2                   dcb18e5f12de1       kube-vip-ha-834040
	251e9f2d7df5c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   417164b9b0cb4       busybox-5b5d89c9d6-d62cw
	7be345e0f22ca       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 minutes ago       Exited              coredns                   0                   4860ab9172968       coredns-5dd5756b68-kq47h
	6926d89f93fa7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 minutes ago       Exited              coredns                   0                   94384bd2f8c98       coredns-5dd5756b68-d6f2x
	ab5ff27a1d4cb       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      14 minutes ago       Exited              kube-proxy                0                   a9e018e6df6e7       kube-proxy-h8svv
	4395af23a1752       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      14 minutes ago       Exited              etcd                      0                   3e8bbccfbf388       etcd-ha-834040
	4b273e6fedf1a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      14 minutes ago       Exited              kube-scheduler            0                   85d4eab358f29       kube-scheduler-ha-834040
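	The table above is the crictl-style container listing gathered by the log collector. As a rough sketch for reproducing it by hand (assuming SSH access to the ha-834040 VM; this command is not part of the captured output):
	
	    $ minikube ssh -p ha-834040 "sudo crictl ps -a"
	
	Listing with -a includes exited containers, which is why both Running and Exited entries appear alongside their ATTEMPT (restart) counts.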
	
	
	==> coredns [6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc] <==
	[INFO] 10.244.0.4:34351 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182428s
	[INFO] 10.244.1.2:54939 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000278877s
	[INFO] 10.244.1.2:37033 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000194177s
	[INFO] 10.244.1.2:37510 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190608s
	[INFO] 10.244.2.2:41536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108104s
	[INFO] 10.244.2.2:41561 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122082s
	[INFO] 10.244.0.4:42660 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221566s
	[INFO] 10.244.0.4:53159 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000188136s
	[INFO] 10.244.0.4:41046 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100215s
	[INFO] 10.244.0.4:50387 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176539s
	[INFO] 10.244.1.2:54773 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120996s
	[INFO] 10.244.1.2:51952 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119653s
	[INFO] 10.244.2.2:59116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134078s
	[INFO] 10.244.2.2:47917 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128001s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1850&timeout=5m55s&timeoutSeconds=355&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1808&timeout=8m19s&timeoutSeconds=499&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1811&timeout=8m53s&timeoutSeconds=533&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450] <==
	[INFO] 10.244.0.4:58455 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118841s
	[INFO] 10.244.0.4:49345 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003481053s
	[INFO] 10.244.0.4:56716 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000187984s
	[INFO] 10.244.0.4:35412 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160258s
	[INFO] 10.244.1.2:56957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150599s
	[INFO] 10.244.1.2:53790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001450755s
	[INFO] 10.244.1.2:53927 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207107s
	[INFO] 10.244.2.2:55011 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001744357s
	[INFO] 10.244.2.2:59931 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000316475s
	[INFO] 10.244.2.2:52694 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184762s
	[INFO] 10.244.2.2:51472 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080603s
	[INFO] 10.244.0.4:33893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185444s
	[INFO] 10.244.0.4:54135 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072181s
	[INFO] 10.244.1.2:36921 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189721s
	[INFO] 10.244.2.2:60407 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015337s
	[INFO] 10.244.2.2:45057 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177157s
	[INFO] 10.244.1.2:52652 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273969s
	[INFO] 10.244.1.2:41042 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160192s
	[INFO] 10.244.2.2:55743 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233222s
	[INFO] 10.244.2.2:43090 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000228333s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a6b7ee8bc2fdc38b38cf39f7d4cb9080e58593b4e35407bf28ba440d3a7aae44] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48351 - 26104 "HINFO IN 7964281783160883336.3331880714538953204. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009735234s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37560->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [da8d445c7477e86f69595642d02430b9dbe61c4ecbff89353b7edca7c7bd72da] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53638 - 4493 "HINFO IN 7144604500221555542.3321365182851520079. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008485411s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-834040
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T20_23_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:23:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:37:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:35:53 +0000   Mon, 11 Mar 2024 20:23:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:35:53 +0000   Mon, 11 Mar 2024 20:23:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:35:53 +0000   Mon, 11 Mar 2024 20:23:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:35:53 +0000   Mon, 11 Mar 2024 20:23:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-834040
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6cb0aa00d5a4d388da50e20e0a9ccef
	  System UUID:                f6cb0aa0-0d5a-4d38-8da5-0e20e0a9ccef
	  Boot ID:                    47b6723c-3999-42a9-a19b-9f1c67aaecb8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-d62cw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5dd5756b68-d6f2x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5dd5756b68-kq47h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-834040                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-bw656                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-834040             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-834040    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-h8svv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-834040             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-834040                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 14m    kube-proxy       
	  Normal   Starting                 118s   kube-proxy       
	  Normal   NodeHasSufficientPID     14m    kubelet          Node ha-834040 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m    kubelet          Node ha-834040 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m    kubelet          Node ha-834040 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  14m    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           14m    node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Normal   NodeReady                14m    kubelet          Node ha-834040 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Normal   RegisteredNode           11m    node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Warning  ContainerGCFailed        3m23s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           108s   node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Normal   RegisteredNode           106s   node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Normal   RegisteredNode           35s    node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	
	
	Name:               ha-834040-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T20_24_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:24:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:37:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:36:34 +0000   Mon, 11 Mar 2024 20:35:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:36:34 +0000   Mon, 11 Mar 2024 20:35:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:36:34 +0000   Mon, 11 Mar 2024 20:35:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:36:34 +0000   Mon, 11 Mar 2024 20:35:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-834040-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d932b403e92c478480bfc9080f018c7a
	  System UUID:                d932b403-e92c-4784-80bf-c9080f018c7a
	  Boot ID:                    ea703ef6-2ef0-497e-8b2c-6615b5191cee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-h9jx5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-834040-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-rqcq6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-834040-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-834040-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-dsjx4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-834040-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-834040-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 91s                    kube-proxy       
	  Normal  RegisteredNode           13m                    node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  NodeNotReady             9m19s                  node-controller  Node ha-834040-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  2m25s (x8 over 2m25s)  kubelet          Node ha-834040-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m25s (x8 over 2m25s)  kubelet          Node ha-834040-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m25s (x7 over 2m25s)  kubelet          Node ha-834040-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           108s                   node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  RegisteredNode           106s                   node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  RegisteredNode           35s                    node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	
	
	Name:               ha-834040-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T20_26_07_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:26:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:37:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:37:11 +0000   Mon, 11 Mar 2024 20:26:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:37:11 +0000   Mon, 11 Mar 2024 20:26:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:37:11 +0000   Mon, 11 Mar 2024 20:26:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:37:11 +0000   Mon, 11 Mar 2024 20:26:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    ha-834040-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6ff34b6936e4e2fada32a020c96ac8f
	  System UUID:                e6ff34b6-936e-4e2f-ada3-2a020c96ac8f
	  Boot ID:                    08b6dcc3-1526-42a3-85bf-eb8f8eb76171
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-mx5b4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-834040-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-cf888                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-834040-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-834040-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-4kkwc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-834040-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-834040-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 49s   kube-proxy       
	  Normal   RegisteredNode           11m   node-controller  Node ha-834040-m03 event: Registered Node ha-834040-m03 in Controller
	  Normal   RegisteredNode           11m   node-controller  Node ha-834040-m03 event: Registered Node ha-834040-m03 in Controller
	  Normal   RegisteredNode           11m   node-controller  Node ha-834040-m03 event: Registered Node ha-834040-m03 in Controller
	  Normal   RegisteredNode           108s  node-controller  Node ha-834040-m03 event: Registered Node ha-834040-m03 in Controller
	  Normal   RegisteredNode           106s  node-controller  Node ha-834040-m03 event: Registered Node ha-834040-m03 in Controller
	  Normal   Starting                 70s   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  70s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  70s   kubelet          Node ha-834040-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s   kubelet          Node ha-834040-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s   kubelet          Node ha-834040-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 70s   kubelet          Node ha-834040-m03 has been rebooted, boot id: 08b6dcc3-1526-42a3-85bf-eb8f8eb76171
	  Normal   RegisteredNode           35s   node-controller  Node ha-834040-m03 event: Registered Node ha-834040-m03 in Controller
	
	
	Name:               ha-834040-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T20_27_30_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:27:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:37:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:37:42 +0000   Mon, 11 Mar 2024 20:37:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:37:42 +0000   Mon, 11 Mar 2024 20:37:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:37:42 +0000   Mon, 11 Mar 2024 20:37:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:37:42 +0000   Mon, 11 Mar 2024 20:37:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-834040-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 01d975a4d97b45958b00e8cebd68bf34
	  System UUID:                01d975a4-d97b-4595-8b00-e8cebd68bf34
	  Boot ID:                    b8f29019-7e0c-455d-b088-b47ed3621612
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gdbjb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-wc99r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x5 over 10m)  kubelet          Node ha-834040-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x5 over 10m)  kubelet          Node ha-834040-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x5 over 10m)  kubelet          Node ha-834040-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-834040-m04 status is now: NodeReady
	  Normal   RegisteredNode           108s               node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal   RegisteredNode           106s               node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal   NodeNotReady             68s                node-controller  Node ha-834040-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           35s                node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-834040-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-834040-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-834040-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-834040-m04 has been rebooted, boot id: b8f29019-7e0c-455d-b088-b47ed3621612
	  Normal   NodeReady                8s                 kubelet          Node ha-834040-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.744921] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.061444] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067061] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.157638] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.161215] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.262542] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +5.181266] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.062600] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.584713] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.482512] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.376234] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.096131] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.894025] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.119032] kauditd_printk_skb: 58 callbacks suppressed
	[Mar11 20:24] kauditd_printk_skb: 6 callbacks suppressed
	[Mar11 20:35] systemd-fstab-generator[3840]: Ignoring "noauto" option for root device
	[  +0.158347] systemd-fstab-generator[3852]: Ignoring "noauto" option for root device
	[  +0.196962] systemd-fstab-generator[3866]: Ignoring "noauto" option for root device
	[  +0.150632] systemd-fstab-generator[3878]: Ignoring "noauto" option for root device
	[  +0.268673] systemd-fstab-generator[3902]: Ignoring "noauto" option for root device
	[  +0.922138] systemd-fstab-generator[4024]: Ignoring "noauto" option for root device
	[  +3.522109] kauditd_printk_skb: 175 callbacks suppressed
	[ +21.790982] kauditd_printk_skb: 41 callbacks suppressed
	[ +25.828492] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [107118ba00c2d09428d6fb98ab4898f7fdeab599261beefaf53f6d20b8a12802] <==
	{"level":"warn","ts":"2024-03-11T20:36:57.128761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.170994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-03-11T20:36:57.128868Z","caller":"traceutil/trace.go:171","msg":"trace[960445305] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2338; }","duration":"151.370494ms","start":"2024-03-11T20:36:56.977482Z","end":"2024-03-11T20:36:57.128853Z","steps":["trace[960445305] 'agreement among raft nodes before linearized reading'  (duration: 151.129285ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:36:57.509333Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"7f2e3f2197a91816","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"174.355733ms"}
	{"level":"warn","ts":"2024-03-11T20:36:57.509464Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"49bf4fb7f029b9bd","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"174.493286ms"}
	{"level":"info","ts":"2024-03-11T20:36:57.509555Z","caller":"traceutil/trace.go:171","msg":"trace[1436968827] linearizableReadLoop","detail":"{readStateIndex:2743; appliedIndex:2743; }","duration":"374.593334ms","start":"2024-03-11T20:36:57.134949Z","end":"2024-03-11T20:36:57.509542Z","steps":["trace[1436968827] 'read index received'  (duration: 374.583135ms)","trace[1436968827] 'applied index is now lower than readState.Index'  (duration: 4.283µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:36:57.554833Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"419.87811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-834040-m03\" ","response":"range_response_count:1 size:3673"}
	{"level":"info","ts":"2024-03-11T20:36:57.554912Z","caller":"traceutil/trace.go:171","msg":"trace[943247302] range","detail":"{range_begin:/registry/minions/ha-834040-m03; range_end:; response_count:1; response_revision:2338; }","duration":"419.966879ms","start":"2024-03-11T20:36:57.134928Z","end":"2024-03-11T20:36:57.554894Z","steps":["trace[943247302] 'agreement among raft nodes before linearized reading'  (duration: 374.66636ms)","trace[943247302] 'range keys from in-memory index tree'  (duration: 45.180061ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:36:57.554951Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T20:36:57.134916Z","time spent":"420.023417ms","remote":"127.0.0.1:51162","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":3697,"request content":"key:\"/registry/minions/ha-834040-m03\" "}
	{"level":"info","ts":"2024-03-11T20:36:57.555339Z","caller":"traceutil/trace.go:171","msg":"trace[1054364633] transaction","detail":"{read_only:false; response_revision:2339; number_of_response:1; }","duration":"419.769953ms","start":"2024-03-11T20:36:57.135558Z","end":"2024-03-11T20:36:57.555328Z","steps":["trace[1054364633] 'process raft request'  (duration: 374.289809ms)","trace[1054364633] 'compare'  (duration: 45.340832ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:36:57.557939Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T20:36:57.135544Z","time spent":"419.855689ms","remote":"127.0.0.1:51236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":420,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:2337 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:370 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >"}
	{"level":"info","ts":"2024-03-11T20:36:58.059588Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:36:58.061431Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:36:58.063327Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:36:58.095909Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fa515506e66f6916","to":"7f2e3f2197a91816","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-11T20:36:58.096017Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:36:58.102804Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fa515506e66f6916","to":"7f2e3f2197a91816","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-11T20:36:58.103Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"warn","ts":"2024-03-11T20:37:47.377907Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.831469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-03-11T20:37:47.378058Z","caller":"traceutil/trace.go:171","msg":"trace[775851496] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2508; }","duration":"117.01008ms","start":"2024-03-11T20:37:47.261022Z","end":"2024-03-11T20:37:47.378032Z","steps":["trace[775851496] 'range keys from in-memory index tree'  (duration: 115.415427ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:37:47.377999Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.016614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2895"}
	{"level":"info","ts":"2024-03-11T20:37:47.378338Z","caller":"traceutil/trace.go:171","msg":"trace[2049817232] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:2508; }","duration":"218.347978ms","start":"2024-03-11T20:37:47.159977Z","end":"2024-03-11T20:37:47.378325Z","steps":["trace[2049817232] 'range keys from in-memory index tree'  (duration: 216.668836ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:37:47.378411Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.563254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-03-11T20:37:47.378479Z","caller":"traceutil/trace.go:171","msg":"trace[30297874] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2508; }","duration":"128.633338ms","start":"2024-03-11T20:37:47.249836Z","end":"2024-03-11T20:37:47.37847Z","steps":["trace[30297874] 'range keys from in-memory index tree'  (duration: 126.964454ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:37:47.377952Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.560527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-wc99r\" ","response":"range_response_count:1 size:4429"}
	{"level":"info","ts":"2024-03-11T20:37:47.378635Z","caller":"traceutil/trace.go:171","msg":"trace[1801264300] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-wc99r; range_end:; response_count:1; response_revision:2508; }","duration":"143.255886ms","start":"2024-03-11T20:37:47.23537Z","end":"2024-03-11T20:37:47.378626Z","steps":["trace[1801264300] 'range keys from in-memory index tree'  (duration: 141.291995ms)"],"step_count":1}
	
	
	==> etcd [4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1] <==
	WARNING: 2024/03/11 20:33:29 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/11 20:33:29 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/11 20:33:29 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/11 20:33:29 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/11 20:33:29 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-11T20:33:29.218634Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.128:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-11T20:33:29.218686Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.128:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-11T20:33:29.218746Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fa515506e66f6916","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-11T20:33:29.218905Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.218958Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.219019Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.219219Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.219301Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.219361Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.219394Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.21942Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.219448Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.219507Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.219593Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.219648Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.219697Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.21973Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.222501Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.128:2380"}
	{"level":"info","ts":"2024-03-11T20:33:29.222652Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.128:2380"}
	{"level":"info","ts":"2024-03-11T20:33:29.222702Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-834040","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.128:2380"],"advertise-client-urls":["https://192.168.39.128:2379"]}
	
	
	==> kernel <==
	 20:37:50 up 15 min,  0 users,  load average: 0.09, 0.25, 0.25
	Linux ha-834040 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5a4fa8160f6f5215b914701525d711241bb4d574dd1f1c698301b206fc545ab5] <==
	I0311 20:37:13.851253       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	I0311 20:37:23.857872       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0311 20:37:23.857918       1 main.go:227] handling current node
	I0311 20:37:23.857933       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0311 20:37:23.857939       1 main.go:250] Node ha-834040-m02 has CIDR [10.244.1.0/24] 
	I0311 20:37:23.858054       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0311 20:37:23.858146       1 main.go:250] Node ha-834040-m03 has CIDR [10.244.2.0/24] 
	I0311 20:37:23.858248       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0311 20:37:23.858281       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	I0311 20:37:33.867389       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0311 20:37:33.867522       1 main.go:227] handling current node
	I0311 20:37:33.867567       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0311 20:37:33.867591       1 main.go:250] Node ha-834040-m02 has CIDR [10.244.1.0/24] 
	I0311 20:37:33.867760       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0311 20:37:33.867798       1 main.go:250] Node ha-834040-m03 has CIDR [10.244.2.0/24] 
	I0311 20:37:33.867906       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0311 20:37:33.867949       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	I0311 20:37:43.886299       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0311 20:37:43.886364       1 main.go:227] handling current node
	I0311 20:37:43.886375       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0311 20:37:43.886380       1 main.go:250] Node ha-834040-m02 has CIDR [10.244.1.0/24] 
	I0311 20:37:43.886490       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0311 20:37:43.886495       1 main.go:250] Node ha-834040-m03 has CIDR [10.244.2.0/24] 
	I0311 20:37:43.886564       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0311 20:37:43.886607       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [eaefaf7c41e62b6bf2975f73ab22408cd0498630eeb0042872545e429387e0db] <==
	I0311 20:35:06.468700       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0311 20:35:06.468871       1 main.go:107] hostIP = 192.168.39.128
	podIP = 192.168.39.128
	I0311 20:35:06.469036       1 main.go:116] setting mtu 1500 for CNI 
	I0311 20:35:06.469055       1 main.go:146] kindnetd IP family: "ipv4"
	I0311 20:35:06.471740       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0311 20:35:09.585387       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0311 20:35:12.653602       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0311 20:35:15.725968       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0311 20:35:18.797456       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0311 20:35:28.134032       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.45:40812->10.96.0.1:443: read: connection reset by peer
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.45:40812->10.96.0.1:443: read: connection reset by peer
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
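
	The kindnetd log above ends in a startup failure: the daemon cannot list nodes through the in-cluster service address (10.96.0.1:443) while the control plane is restarting, retries a handful of times, and then panics once its retry budget is exhausted. The following is a minimal, illustrative Go sketch of that bounded retry-then-panic pattern; the retry count, the 3-second delay, and the fetchNodes helper are assumptions for illustration only, not kindnetd's actual implementation (which uses a Kubernetes client rather than raw HTTP).

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// fetchNodes stands in for the node-list call kindnetd makes against the
	// in-cluster API endpoint; in this sketch any transport error is retried.
	func fetchNodes(endpoint string) error {
		resp, err := http.Get(endpoint)
		if err != nil {
			return err
		}
		resp.Body.Close()
		return nil
	}

	func main() {
		const maxRetries = 5 // assumed retry budget; the real value is not shown in the log
		endpoint := "https://10.96.0.1:443/api/v1/nodes"

		var lastErr error
		for attempt := 0; attempt < maxRetries; attempt++ {
			if lastErr = fetchNodes(endpoint); lastErr == nil {
				return // node list obtained, startup continues
			}
			fmt.Printf("Failed to get nodes, retrying after error: %v\n", lastErr)
			time.Sleep(3 * time.Second) // roughly matches the spacing between retries in the log
		}
		panic(fmt.Sprintf("Reached maximum retries obtaining node list: %v", lastErr))
	}

	Under this pattern the panic is expected behavior whenever the apiserver stays unreachable for the whole retry window, which is why the pod simply restarts (see the second kindnet container above) once the control plane is back.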
	
	
	==> kube-apiserver [4d12665eb117c2cc75d85256cf4dd018d8ed2992d5f7c141134a85b41b2a4294] <==
	I0311 20:35:51.891559       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0311 20:35:51.891576       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0311 20:35:51.891596       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0311 20:35:51.891676       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0311 20:35:51.891779       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0311 20:35:51.976323       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0311 20:35:51.979479       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0311 20:35:51.979866       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0311 20:35:51.979933       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0311 20:35:51.982341       1 shared_informer.go:318] Caches are synced for configmaps
	I0311 20:35:51.982405       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0311 20:35:51.982960       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0311 20:35:51.987895       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0311 20:35:51.988003       1 aggregator.go:166] initial CRD sync complete...
	I0311 20:35:51.988042       1 autoregister_controller.go:141] Starting autoregister controller
	I0311 20:35:51.988048       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0311 20:35:51.988054       1 cache.go:39] Caches are synced for autoregister controller
	I0311 20:35:51.997674       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0311 20:35:52.001196       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.101 192.168.39.40]
	I0311 20:35:52.003331       1 controller.go:624] quota admission added evaluator for: endpoints
	I0311 20:35:52.020640       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0311 20:35:52.026842       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0311 20:35:52.890036       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0311 20:35:53.445228       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.101 192.168.39.128 192.168.39.40]
	W0311 20:36:03.449214       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.101 192.168.39.128]
	
	
	==> kube-apiserver [9f072720516b73eb54d2f1b36bfaf802e1d1f8c14b6fab73ed78f4e12e4dfc3d] <==
	I0311 20:35:06.425606       1 options.go:220] external host was not specified, using 192.168.39.128
	I0311 20:35:06.432266       1 server.go:148] Version: v1.28.4
	I0311 20:35:06.432451       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:35:07.102858       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0311 20:35:07.109640       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0311 20:35:07.109867       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0311 20:35:07.110200       1 instance.go:298] Using reconciler: lease
	W0311 20:35:27.101607       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0311 20:35:27.102541       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0311 20:35:27.110843       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [a61a17645171f66b7a1858a9482aeee87d6041bfd933d305b1548e3ebfa58800] <==
	I0311 20:36:04.401310       1 event.go:307] "Event occurred" object="ha-834040-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-834040-m03 event: Registered Node ha-834040-m03 in Controller"
	I0311 20:36:04.401342       1 event.go:307] "Event occurred" object="ha-834040-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller"
	I0311 20:36:04.401366       1 event.go:307] "Event occurred" object="ha-834040" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-834040 event: Registered Node ha-834040 in Controller"
	I0311 20:36:04.412596       1 shared_informer.go:318] Caches are synced for deployment
	I0311 20:36:04.416759       1 shared_informer.go:318] Caches are synced for attach detach
	I0311 20:36:04.430063       1 shared_informer.go:318] Caches are synced for disruption
	I0311 20:36:04.461517       1 shared_informer.go:318] Caches are synced for resource quota
	I0311 20:36:04.470881       1 shared_informer.go:318] Caches are synced for resource quota
	I0311 20:36:04.500196       1 shared_informer.go:318] Caches are synced for cronjob
	I0311 20:36:04.914785       1 shared_informer.go:318] Caches are synced for garbage collector
	I0311 20:36:04.916024       1 shared_informer.go:318] Caches are synced for garbage collector
	I0311 20:36:04.916157       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0311 20:36:06.185671       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="failed to update kube-dns-mftkb EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-mftkb\": the object has been modified; please apply your changes to the latest version and try again"
	I0311 20:36:06.187378       1 event.go:307] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0311 20:36:06.187628       1 event.go:298] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"5413107a-0dfe-4873-8a45-70b7f861b4cd", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-mftkb EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-mftkb": the object has been modified; please apply your changes to the latest version and try again
	I0311 20:36:06.210182       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.508148ms"
	I0311 20:36:06.210417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.91µs"
	I0311 20:36:06.777349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="67.198µs"
	I0311 20:36:20.876197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="12.155597ms"
	I0311 20:36:20.876340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="81.763µs"
	I0311 20:36:41.522033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.484557ms"
	I0311 20:36:41.522430       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="144.702µs"
	I0311 20:37:02.058548       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.782096ms"
	I0311 20:37:02.059621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="63.142µs"
	I0311 20:37:42.724975       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-834040-m04"
	
	
	==> kube-controller-manager [f1a69a51bad87e670335840f5e4e47f671ebfb4ee83d1a1be58ee2fe4d9111f1] <==
	I0311 20:35:07.156754       1 serving.go:348] Generated self-signed cert in-memory
	I0311 20:35:07.868312       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0311 20:35:07.868359       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:35:07.870331       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0311 20:35:07.870460       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0311 20:35:07.870711       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0311 20:35:07.870860       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0311 20:35:28.117944       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.128:8443/healthz\": dial tcp 192.168.39.128:8443: connect: connection refused"
	
	
	==> kube-proxy [ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25] <==
	E0311 20:32:03.085770       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:03.085707       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:03.085820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:10.317586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:10.317685       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:10.317586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:10.317717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:10.317800       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:10.317860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:21.133545       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:21.133653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:21.133546       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:21.133687       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:24.206833       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:24.206967       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:39.567055       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:39.567347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:42.638253       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:42.638350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:51.854404       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:51.854653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:33:07.214528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:33:07.214980       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:33:22.574689       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:33:22.575060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [f9876035a67109aab2d7ccb01e043938c07a68707f0b5aac080bdc3f86a9a263] <==
	I0311 20:35:07.930788       1 server_others.go:69] "Using iptables proxy"
	E0311 20:35:10.095535       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-834040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:35:13.166997       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-834040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:35:16.240564       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-834040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:35:22.383285       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-834040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:35:34.670318       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-834040": dial tcp 192.168.39.254:8443: connect: no route to host
	I0311 20:35:52.042967       1 node.go:141] Successfully retrieved node IP: 192.168.39.128
	I0311 20:35:52.085004       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 20:35:52.085151       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 20:35:52.087925       1 server_others.go:152] "Using iptables Proxier"
	I0311 20:35:52.088044       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 20:35:52.088409       1 server.go:846] "Version info" version="v1.28.4"
	I0311 20:35:52.088446       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:35:52.089756       1 config.go:188] "Starting service config controller"
	I0311 20:35:52.089826       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 20:35:52.089906       1 config.go:97] "Starting endpoint slice config controller"
	I0311 20:35:52.089938       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 20:35:52.092214       1 config.go:315] "Starting node config controller"
	I0311 20:35:52.092248       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 20:35:52.190375       1 shared_informer.go:318] Caches are synced for service config
	I0311 20:35:52.190389       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 20:35:52.193180       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [295775061cd270ab219ce780ebeb623bf6f1dedfcd5e5693598e3cb2b65c506d] <==
	W0311 20:35:43.940341       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.128:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:43.940388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.128:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:43.993341       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.128:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:43.993470       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.128:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:44.659027       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.128:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:44.659188       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.128:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:45.908877       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.128:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:45.908960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.128:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:46.500888       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.128:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:46.501032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.128:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:46.918884       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.128:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:46.918944       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.128:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:47.228369       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.128:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:47.228471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.128:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:47.788306       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:47.788387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:47.912772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.128:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:47.912840       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.128:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:48.112507       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:48.112596       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:48.190808       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.128:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:48.190879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.128:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:48.582727       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:48.582879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	I0311 20:36:09.225023       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861] <==
	W0311 20:33:25.629790       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0311 20:33:25.629877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 20:33:25.648704       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 20:33:25.648759       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 20:33:25.843457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 20:33:25.843537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 20:33:25.901179       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 20:33:25.901263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 20:33:26.067521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 20:33:26.067582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 20:33:26.524530       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0311 20:33:26.524719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0311 20:33:26.898799       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 20:33:26.898827       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 20:33:27.088978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 20:33:27.089038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 20:33:27.243140       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 20:33:27.243239       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 20:33:27.393809       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 20:33:27.393886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0311 20:33:27.560746       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 20:33:27.560958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0311 20:33:27.984255       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 20:33:27.984310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 20:33:29.142305       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 11 20:36:08 ha-834040 kubelet[1373]: I0311 20:36:08.578444    1373 scope.go:117] "RemoveContainer" containerID="a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7"
	Mar 11 20:36:08 ha-834040 kubelet[1373]: E0311 20:36:08.579156    1373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bbc64228-86a0-4e0c-9eef-f4644439ca13)\"" pod="kube-system/storage-provisioner" podUID="bbc64228-86a0-4e0c-9eef-f4644439ca13"
	Mar 11 20:36:11 ha-834040 kubelet[1373]: I0311 20:36:11.578397    1373 scope.go:117] "RemoveContainer" containerID="eaefaf7c41e62b6bf2975f73ab22408cd0498630eeb0042872545e429387e0db"
	Mar 11 20:36:11 ha-834040 kubelet[1373]: E0311 20:36:11.578717    1373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-bw656_kube-system(edb13135-e5b5-46df-922e-5ebfb444c219)\"" pod="kube-system/kindnet-bw656" podUID="edb13135-e5b5-46df-922e-5ebfb444c219"
	Mar 11 20:36:18 ha-834040 kubelet[1373]: I0311 20:36:18.426516    1373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5b5d89c9d6-d62cw" podStartSLOduration=569.44741673 podCreationTimestamp="2024-03-11 20:26:48 +0000 UTC" firstStartedPulling="2024-03-11 20:26:49.916651431 +0000 UTC m=+202.510928325" lastFinishedPulling="2024-03-11 20:26:50.895636205 +0000 UTC m=+203.489913101" observedRunningTime="2024-03-11 20:26:51.569851673 +0000 UTC m=+204.164128588" watchObservedRunningTime="2024-03-11 20:36:18.426401506 +0000 UTC m=+771.020678420"
	Mar 11 20:36:20 ha-834040 kubelet[1373]: I0311 20:36:20.578341    1373 scope.go:117] "RemoveContainer" containerID="a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7"
	Mar 11 20:36:20 ha-834040 kubelet[1373]: E0311 20:36:20.578660    1373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bbc64228-86a0-4e0c-9eef-f4644439ca13)\"" pod="kube-system/storage-provisioner" podUID="bbc64228-86a0-4e0c-9eef-f4644439ca13"
	Mar 11 20:36:22 ha-834040 kubelet[1373]: I0311 20:36:22.577805    1373 scope.go:117] "RemoveContainer" containerID="eaefaf7c41e62b6bf2975f73ab22408cd0498630eeb0042872545e429387e0db"
	Mar 11 20:36:27 ha-834040 kubelet[1373]: E0311 20:36:27.614321    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:36:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:36:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:36:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:36:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 20:36:35 ha-834040 kubelet[1373]: I0311 20:36:35.581880    1373 scope.go:117] "RemoveContainer" containerID="a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7"
	Mar 11 20:36:35 ha-834040 kubelet[1373]: E0311 20:36:35.582325    1373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bbc64228-86a0-4e0c-9eef-f4644439ca13)\"" pod="kube-system/storage-provisioner" podUID="bbc64228-86a0-4e0c-9eef-f4644439ca13"
	Mar 11 20:36:48 ha-834040 kubelet[1373]: I0311 20:36:48.577769    1373 scope.go:117] "RemoveContainer" containerID="a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7"
	Mar 11 20:36:48 ha-834040 kubelet[1373]: E0311 20:36:48.578168    1373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bbc64228-86a0-4e0c-9eef-f4644439ca13)\"" pod="kube-system/storage-provisioner" podUID="bbc64228-86a0-4e0c-9eef-f4644439ca13"
	Mar 11 20:37:03 ha-834040 kubelet[1373]: I0311 20:37:03.578496    1373 scope.go:117] "RemoveContainer" containerID="a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7"
	Mar 11 20:37:03 ha-834040 kubelet[1373]: E0311 20:37:03.579189    1373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bbc64228-86a0-4e0c-9eef-f4644439ca13)\"" pod="kube-system/storage-provisioner" podUID="bbc64228-86a0-4e0c-9eef-f4644439ca13"
	Mar 11 20:37:17 ha-834040 kubelet[1373]: I0311 20:37:17.578310    1373 scope.go:117] "RemoveContainer" containerID="a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7"
	Mar 11 20:37:27 ha-834040 kubelet[1373]: E0311 20:37:27.614046    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:37:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:37:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:37:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:37:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 20:37:49.406129   34326 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18358-11004/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-834040 -n ha-834040
helpers_test.go:261: (dbg) Run:  kubectl --context ha-834040 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/RestartClusterKeepsNodes (386.40s)

                                                
                                    
TestMutliControlPlane/serial/StopCluster (142.17s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 stop -v=7 --alsologtostderr
E0311 20:39:01.981787   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-834040 stop -v=7 --alsologtostderr: exit status 82 (2m0.483924945s)

                                                
                                                
-- stdout --
	* Stopping node "ha-834040-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:38:09.682548   34716 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:38:09.682659   34716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:38:09.682669   34716 out.go:304] Setting ErrFile to fd 2...
	I0311 20:38:09.682674   34716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:38:09.682931   34716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:38:09.683281   34716 out.go:298] Setting JSON to false
	I0311 20:38:09.683387   34716 mustload.go:65] Loading cluster: ha-834040
	I0311 20:38:09.683781   34716 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:38:09.683889   34716 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:38:09.684076   34716 mustload.go:65] Loading cluster: ha-834040
	I0311 20:38:09.684209   34716 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:38:09.684252   34716 stop.go:39] StopHost: ha-834040-m04
	I0311 20:38:09.684794   34716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:38:09.684838   34716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:38:09.699556   34716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36755
	I0311 20:38:09.699991   34716 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:38:09.700510   34716 main.go:141] libmachine: Using API Version  1
	I0311 20:38:09.700533   34716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:38:09.700871   34716 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:38:09.703177   34716 out.go:177] * Stopping node "ha-834040-m04"  ...
	I0311 20:38:09.704999   34716 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0311 20:38:09.705021   34716 main.go:141] libmachine: (ha-834040-m04) Calling .DriverName
	I0311 20:38:09.705272   34716 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0311 20:38:09.705313   34716 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHHostname
	I0311 20:38:09.708159   34716 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:38:09.708630   34716 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:37:37 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:38:09.708654   34716 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:38:09.708847   34716 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHPort
	I0311 20:38:09.709031   34716 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHKeyPath
	I0311 20:38:09.709225   34716 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHUsername
	I0311 20:38:09.709354   34716 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m04/id_rsa Username:docker}
	I0311 20:38:09.796557   34716 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0311 20:38:09.851584   34716 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0311 20:38:09.906274   34716 main.go:141] libmachine: Stopping "ha-834040-m04"...
	I0311 20:38:09.906318   34716 main.go:141] libmachine: (ha-834040-m04) Calling .GetState
	I0311 20:38:09.907928   34716 main.go:141] libmachine: (ha-834040-m04) Calling .Stop
	I0311 20:38:09.910992   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 0/120
	I0311 20:38:10.912807   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 1/120
	I0311 20:38:11.914141   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 2/120
	I0311 20:38:12.915623   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 3/120
	I0311 20:38:13.916920   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 4/120
	I0311 20:38:14.918271   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 5/120
	I0311 20:38:15.919583   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 6/120
	I0311 20:38:16.920766   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 7/120
	I0311 20:38:17.922992   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 8/120
	I0311 20:38:18.924238   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 9/120
	I0311 20:38:19.926544   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 10/120
	I0311 20:38:20.927946   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 11/120
	I0311 20:38:21.929418   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 12/120
	I0311 20:38:22.930696   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 13/120
	I0311 20:38:23.931943   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 14/120
	I0311 20:38:24.933732   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 15/120
	I0311 20:38:25.935225   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 16/120
	I0311 20:38:26.936480   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 17/120
	I0311 20:38:27.937792   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 18/120
	I0311 20:38:28.939995   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 19/120
	I0311 20:38:29.941998   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 20/120
	I0311 20:38:30.943255   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 21/120
	I0311 20:38:31.944868   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 22/120
	I0311 20:38:32.947104   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 23/120
	I0311 20:38:33.948329   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 24/120
	I0311 20:38:34.950062   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 25/120
	I0311 20:38:35.951262   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 26/120
	I0311 20:38:36.952758   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 27/120
	I0311 20:38:37.954074   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 28/120
	I0311 20:38:38.955483   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 29/120
	I0311 20:38:39.957748   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 30/120
	I0311 20:38:40.959091   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 31/120
	I0311 20:38:41.960405   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 32/120
	I0311 20:38:42.961756   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 33/120
	I0311 20:38:43.962976   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 34/120
	I0311 20:38:44.964837   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 35/120
	I0311 20:38:45.966462   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 36/120
	I0311 20:38:46.967780   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 37/120
	I0311 20:38:47.969520   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 38/120
	I0311 20:38:48.971227   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 39/120
	I0311 20:38:49.973066   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 40/120
	I0311 20:38:50.975218   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 41/120
	I0311 20:38:51.976345   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 42/120
	I0311 20:38:52.978126   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 43/120
	I0311 20:38:53.979522   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 44/120
	I0311 20:38:54.981462   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 45/120
	I0311 20:38:55.983165   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 46/120
	I0311 20:38:56.984624   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 47/120
	I0311 20:38:57.985661   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 48/120
	I0311 20:38:58.986978   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 49/120
	I0311 20:38:59.988770   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 50/120
	I0311 20:39:00.990203   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 51/120
	I0311 20:39:01.991630   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 52/120
	I0311 20:39:02.993207   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 53/120
	I0311 20:39:03.995185   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 54/120
	I0311 20:39:04.997028   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 55/120
	I0311 20:39:05.998350   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 56/120
	I0311 20:39:06.999677   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 57/120
	I0311 20:39:08.001157   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 58/120
	I0311 20:39:09.003159   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 59/120
	I0311 20:39:10.005080   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 60/120
	I0311 20:39:11.007175   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 61/120
	I0311 20:39:12.008400   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 62/120
	I0311 20:39:13.010360   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 63/120
	I0311 20:39:14.011660   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 64/120
	I0311 20:39:15.013380   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 65/120
	I0311 20:39:16.015256   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 66/120
	I0311 20:39:17.016933   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 67/120
	I0311 20:39:18.019191   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 68/120
	I0311 20:39:19.020499   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 69/120
	I0311 20:39:20.022446   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 70/120
	I0311 20:39:21.024619   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 71/120
	I0311 20:39:22.026126   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 72/120
	I0311 20:39:23.027724   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 73/120
	I0311 20:39:24.029048   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 74/120
	I0311 20:39:25.030796   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 75/120
	I0311 20:39:26.032095   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 76/120
	I0311 20:39:27.033297   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 77/120
	I0311 20:39:28.034711   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 78/120
	I0311 20:39:29.036343   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 79/120
	I0311 20:39:30.038564   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 80/120
	I0311 20:39:31.040381   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 81/120
	I0311 20:39:32.041806   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 82/120
	I0311 20:39:33.042997   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 83/120
	I0311 20:39:34.044374   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 84/120
	I0311 20:39:35.045849   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 85/120
	I0311 20:39:36.047020   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 86/120
	I0311 20:39:37.049028   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 87/120
	I0311 20:39:38.050282   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 88/120
	I0311 20:39:39.052441   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 89/120
	I0311 20:39:40.054126   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 90/120
	I0311 20:39:41.055184   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 91/120
	I0311 20:39:42.056838   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 92/120
	I0311 20:39:43.058192   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 93/120
	I0311 20:39:44.059480   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 94/120
	I0311 20:39:45.061566   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 95/120
	I0311 20:39:46.062881   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 96/120
	I0311 20:39:47.064486   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 97/120
	I0311 20:39:48.065707   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 98/120
	I0311 20:39:49.067315   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 99/120
	I0311 20:39:50.069376   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 100/120
	I0311 20:39:51.071146   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 101/120
	I0311 20:39:52.072527   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 102/120
	I0311 20:39:53.073855   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 103/120
	I0311 20:39:54.075270   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 104/120
	I0311 20:39:55.077150   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 105/120
	I0311 20:39:56.079459   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 106/120
	I0311 20:39:57.080686   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 107/120
	I0311 20:39:58.082086   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 108/120
	I0311 20:39:59.083219   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 109/120
	I0311 20:40:00.085112   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 110/120
	I0311 20:40:01.086416   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 111/120
	I0311 20:40:02.087627   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 112/120
	I0311 20:40:03.089014   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 113/120
	I0311 20:40:04.091162   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 114/120
	I0311 20:40:05.092711   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 115/120
	I0311 20:40:06.094194   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 116/120
	I0311 20:40:07.095636   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 117/120
	I0311 20:40:08.096962   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 118/120
	I0311 20:40:09.099260   34716 main.go:141] libmachine: (ha-834040-m04) Waiting for machine to stop 119/120
	I0311 20:40:10.100647   34716 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0311 20:40:10.100721   34716 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0311 20:40:10.102959   34716 out.go:177] 
	W0311 20:40:10.104307   34716 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0311 20:40:10.104321   34716 out.go:239] * 
	* 
	W0311 20:40:10.107224   34716 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 20:40:10.108777   34716 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-834040 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr: exit status 3 (19.050599903s)

                                                
                                                
-- stdout --
	ha-834040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-834040-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:40:10.166976   35035 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:40:10.167157   35035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:40:10.167174   35035 out.go:304] Setting ErrFile to fd 2...
	I0311 20:40:10.167184   35035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:40:10.167448   35035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:40:10.167657   35035 out.go:298] Setting JSON to false
	I0311 20:40:10.167692   35035 mustload.go:65] Loading cluster: ha-834040
	I0311 20:40:10.167805   35035 notify.go:220] Checking for updates...
	I0311 20:40:10.168220   35035 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:40:10.168238   35035 status.go:255] checking status of ha-834040 ...
	I0311 20:40:10.168684   35035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:40:10.168771   35035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:40:10.183288   35035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39677
	I0311 20:40:10.183704   35035 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:40:10.184227   35035 main.go:141] libmachine: Using API Version  1
	I0311 20:40:10.184260   35035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:40:10.184598   35035 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:40:10.184811   35035 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:40:10.201133   35035 status.go:330] ha-834040 host status = "Running" (err=<nil>)
	I0311 20:40:10.201153   35035 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:40:10.201439   35035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:40:10.201479   35035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:40:10.216207   35035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35045
	I0311 20:40:10.216639   35035 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:40:10.217209   35035 main.go:141] libmachine: Using API Version  1
	I0311 20:40:10.217239   35035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:40:10.217540   35035 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:40:10.217731   35035 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:40:10.220788   35035 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:40:10.221155   35035 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:40:10.221178   35035 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:40:10.221305   35035 host.go:66] Checking if "ha-834040" exists ...
	I0311 20:40:10.221598   35035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:40:10.221649   35035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:40:10.235483   35035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43743
	I0311 20:40:10.235875   35035 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:40:10.236353   35035 main.go:141] libmachine: Using API Version  1
	I0311 20:40:10.236376   35035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:40:10.236649   35035 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:40:10.236846   35035 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:40:10.237023   35035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:40:10.237049   35035 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:40:10.239816   35035 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:40:10.240275   35035 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:40:10.240332   35035 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:40:10.240463   35035 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:40:10.240621   35035 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:40:10.240774   35035 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:40:10.240922   35035 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:40:10.333364   35035 ssh_runner.go:195] Run: systemctl --version
	I0311 20:40:10.342372   35035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:40:10.364185   35035 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:40:10.364208   35035 api_server.go:166] Checking apiserver status ...
	I0311 20:40:10.364243   35035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:40:10.394216   35035 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5215/cgroup
	W0311 20:40:10.413973   35035 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5215/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:40:10.414025   35035 ssh_runner.go:195] Run: ls
	I0311 20:40:10.420284   35035 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:40:10.425972   35035 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:40:10.425997   35035 status.go:422] ha-834040 apiserver status = Running (err=<nil>)
	I0311 20:40:10.426008   35035 status.go:257] ha-834040 status: &{Name:ha-834040 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:40:10.426038   35035 status.go:255] checking status of ha-834040-m02 ...
	I0311 20:40:10.426338   35035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:40:10.426368   35035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:40:10.442241   35035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
	I0311 20:40:10.442667   35035 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:40:10.443187   35035 main.go:141] libmachine: Using API Version  1
	I0311 20:40:10.443211   35035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:40:10.443575   35035 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:40:10.443760   35035 main.go:141] libmachine: (ha-834040-m02) Calling .GetState
	I0311 20:40:10.445340   35035 status.go:330] ha-834040-m02 host status = "Running" (err=<nil>)
	I0311 20:40:10.445356   35035 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:40:10.445630   35035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:40:10.445662   35035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:40:10.459221   35035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0311 20:40:10.459670   35035 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:40:10.460120   35035 main.go:141] libmachine: Using API Version  1
	I0311 20:40:10.460139   35035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:40:10.460412   35035 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:40:10.460598   35035 main.go:141] libmachine: (ha-834040-m02) Calling .GetIP
	I0311 20:40:10.462836   35035 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:40:10.463249   35035 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:35:15 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:40:10.463273   35035 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:40:10.463434   35035 host.go:66] Checking if "ha-834040-m02" exists ...
	I0311 20:40:10.463869   35035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:40:10.463924   35035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:40:10.479036   35035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40119
	I0311 20:40:10.479360   35035 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:40:10.479736   35035 main.go:141] libmachine: Using API Version  1
	I0311 20:40:10.479774   35035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:40:10.480135   35035 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:40:10.480318   35035 main.go:141] libmachine: (ha-834040-m02) Calling .DriverName
	I0311 20:40:10.480486   35035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:40:10.480505   35035 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHHostname
	I0311 20:40:10.482963   35035 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:40:10.483327   35035 main.go:141] libmachine: (ha-834040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:4e:e5", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:35:15 +0000 UTC Type:0 Mac:52:54:00:82:4e:e5 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:ha-834040-m02 Clientid:01:52:54:00:82:4e:e5}
	I0311 20:40:10.483346   35035 main.go:141] libmachine: (ha-834040-m02) DBG | domain ha-834040-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:82:4e:e5 in network mk-ha-834040
	I0311 20:40:10.483491   35035 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHPort
	I0311 20:40:10.483649   35035 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHKeyPath
	I0311 20:40:10.483810   35035 main.go:141] libmachine: (ha-834040-m02) Calling .GetSSHUsername
	I0311 20:40:10.483937   35035 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m02/id_rsa Username:docker}
	I0311 20:40:10.565821   35035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:40:10.584477   35035 kubeconfig.go:125] found "ha-834040" server: "https://192.168.39.254:8443"
	I0311 20:40:10.584497   35035 api_server.go:166] Checking apiserver status ...
	I0311 20:40:10.584526   35035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:40:10.602014   35035 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1401/cgroup
	W0311 20:40:10.613608   35035 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1401/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:40:10.613650   35035 ssh_runner.go:195] Run: ls
	I0311 20:40:10.619014   35035 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0311 20:40:10.624868   35035 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0311 20:40:10.624888   35035 status.go:422] ha-834040-m02 apiserver status = Running (err=<nil>)
	I0311 20:40:10.624895   35035 status.go:257] ha-834040-m02 status: &{Name:ha-834040-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:40:10.624911   35035 status.go:255] checking status of ha-834040-m04 ...
	I0311 20:40:10.625209   35035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:40:10.625264   35035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:40:10.639826   35035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I0311 20:40:10.640162   35035 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:40:10.640608   35035 main.go:141] libmachine: Using API Version  1
	I0311 20:40:10.640638   35035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:40:10.640995   35035 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:40:10.641188   35035 main.go:141] libmachine: (ha-834040-m04) Calling .GetState
	I0311 20:40:10.642726   35035 status.go:330] ha-834040-m04 host status = "Running" (err=<nil>)
	I0311 20:40:10.642742   35035 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:40:10.643083   35035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:40:10.643124   35035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:40:10.656783   35035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39893
	I0311 20:40:10.657131   35035 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:40:10.657567   35035 main.go:141] libmachine: Using API Version  1
	I0311 20:40:10.657593   35035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:40:10.657944   35035 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:40:10.658115   35035 main.go:141] libmachine: (ha-834040-m04) Calling .GetIP
	I0311 20:40:10.660578   35035 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:40:10.660977   35035 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:37:37 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:40:10.660998   35035 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:40:10.661133   35035 host.go:66] Checking if "ha-834040-m04" exists ...
	I0311 20:40:10.661519   35035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:40:10.661562   35035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:40:10.676991   35035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0311 20:40:10.677296   35035 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:40:10.677684   35035 main.go:141] libmachine: Using API Version  1
	I0311 20:40:10.677707   35035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:40:10.678007   35035 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:40:10.678173   35035 main.go:141] libmachine: (ha-834040-m04) Calling .DriverName
	I0311 20:40:10.678366   35035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:40:10.678389   35035 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHHostname
	I0311 20:40:10.680682   35035 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:40:10.681040   35035 main.go:141] libmachine: (ha-834040-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:19:4b", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:37:37 +0000 UTC Type:0 Mac:52:54:00:3e:19:4b Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-834040-m04 Clientid:01:52:54:00:3e:19:4b}
	I0311 20:40:10.681066   35035 main.go:141] libmachine: (ha-834040-m04) DBG | domain ha-834040-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:3e:19:4b in network mk-ha-834040
	I0311 20:40:10.681283   35035 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHPort
	I0311 20:40:10.681463   35035 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHKeyPath
	I0311 20:40:10.681633   35035 main.go:141] libmachine: (ha-834040-m04) Calling .GetSSHUsername
	I0311 20:40:10.681778   35035 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040-m04/id_rsa Username:docker}
	W0311 20:40:29.160974   35035 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.44:22: connect: no route to host
	W0311 20:40:29.161085   35035 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.44:22: connect: no route to host
	E0311 20:40:29.161100   35035 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.44:22: connect: no route to host
	I0311 20:40:29.161107   35035 status.go:257] ha-834040-m04 status: &{Name:ha-834040-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0311 20:40:29.161130   35035 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.44:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-834040 -n ha-834040
helpers_test.go:244: <<< TestMutliControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-834040 logs -n 25: (1.97802565s)
helpers_test.go:252: TestMutliControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-834040 ssh -n ha-834040-m02 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m03_ha-834040-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04:/home/docker/cp-test_ha-834040-m03_ha-834040-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m04 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m03_ha-834040-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-834040 cp testdata/cp-test.txt                                                | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile2017558617/001/cp-test_ha-834040-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040:/home/docker/cp-test_ha-834040-m04_ha-834040.txt                       |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040 sudo cat                                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m04_ha-834040.txt                                 |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m02:/home/docker/cp-test_ha-834040-m04_ha-834040-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m02 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m04_ha-834040-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m03:/home/docker/cp-test_ha-834040-m04_ha-834040-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n                                                                 | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | ha-834040-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-834040 ssh -n ha-834040-m03 sudo cat                                          | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC | 11 Mar 24 20:27 UTC |
	|         | /home/docker/cp-test_ha-834040-m04_ha-834040-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-834040 node stop m02 -v=7                                                     | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-834040 node start m02 -v=7                                                    | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-834040 -v=7                                                           | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-834040 -v=7                                                                | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-834040 --wait=true -v=7                                                    | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:33 UTC | 11 Mar 24 20:37 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-834040                                                                | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:37 UTC |                     |
	| node    | ha-834040 node delete m03 -v=7                                                   | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:37 UTC | 11 Mar 24 20:38 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-834040 stop -v=7                                                              | ha-834040 | jenkins | v1.32.0 | 11 Mar 24 20:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 20:33:28
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 20:33:28.226126   33198 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:33:28.226349   33198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:33:28.226357   33198 out.go:304] Setting ErrFile to fd 2...
	I0311 20:33:28.226361   33198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:33:28.226553   33198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:33:28.227065   33198 out.go:298] Setting JSON to false
	I0311 20:33:28.227905   33198 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4557,"bootTime":1710184651,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 20:33:28.227964   33198 start.go:139] virtualization: kvm guest
	I0311 20:33:28.230585   33198 out.go:177] * [ha-834040] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 20:33:28.232124   33198 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 20:33:28.232163   33198 notify.go:220] Checking for updates...
	I0311 20:33:28.233861   33198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 20:33:28.235553   33198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:33:28.237206   33198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:33:28.238616   33198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 20:33:28.240071   33198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 20:33:28.241787   33198 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:33:28.241877   33198 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 20:33:28.242309   33198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:33:28.242345   33198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:33:28.257426   33198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37593
	I0311 20:33:28.257815   33198 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:33:28.258314   33198 main.go:141] libmachine: Using API Version  1
	I0311 20:33:28.258337   33198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:33:28.258697   33198 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:33:28.258846   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:33:28.292885   33198 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 20:33:28.294287   33198 start.go:297] selected driver: kvm2
	I0311 20:33:28.294303   33198 start.go:901] validating driver "kvm2" against &{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:33:28.294423   33198 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 20:33:28.294717   33198 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:33:28.294775   33198 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 20:33:28.308830   33198 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 20:33:28.309472   33198 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 20:33:28.309501   33198 cni.go:84] Creating CNI manager for ""
	I0311 20:33:28.309507   33198 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0311 20:33:28.309551   33198 start.go:340] cluster config:
	{Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:33:28.309674   33198 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:33:28.311515   33198 out.go:177] * Starting "ha-834040" primary control-plane node in "ha-834040" cluster
	I0311 20:33:28.312806   33198 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:33:28.312831   33198 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0311 20:33:28.312837   33198 cache.go:56] Caching tarball of preloaded images
	I0311 20:33:28.312908   33198 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 20:33:28.312921   33198 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 20:33:28.313041   33198 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/config.json ...
	I0311 20:33:28.313220   33198 start.go:360] acquireMachinesLock for ha-834040: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 20:33:28.313255   33198 start.go:364] duration metric: took 19.892µs to acquireMachinesLock for "ha-834040"
	I0311 20:33:28.313268   33198 start.go:96] Skipping create...Using existing machine configuration
	I0311 20:33:28.313276   33198 fix.go:54] fixHost starting: 
	I0311 20:33:28.313506   33198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:33:28.313532   33198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:33:28.326605   33198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36093
	I0311 20:33:28.327013   33198 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:33:28.327512   33198 main.go:141] libmachine: Using API Version  1
	I0311 20:33:28.327531   33198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:33:28.327802   33198 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:33:28.327982   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:33:28.328145   33198 main.go:141] libmachine: (ha-834040) Calling .GetState
	I0311 20:33:28.329537   33198 fix.go:112] recreateIfNeeded on ha-834040: state=Running err=<nil>
	W0311 20:33:28.329571   33198 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 20:33:28.331467   33198 out.go:177] * Updating the running kvm2 "ha-834040" VM ...
	I0311 20:33:28.332916   33198 machine.go:94] provisionDockerMachine start ...
	I0311 20:33:28.332942   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:33:28.333104   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:33:28.335506   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.335915   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:28.335938   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.336054   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:33:28.336194   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.336357   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.336486   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:33:28.336637   33198 main.go:141] libmachine: Using SSH client type: native
	I0311 20:33:28.336893   33198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:33:28.336908   33198 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 20:33:28.450423   33198 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-834040
	
	I0311 20:33:28.450452   33198 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:33:28.450663   33198 buildroot.go:166] provisioning hostname "ha-834040"
	I0311 20:33:28.450678   33198 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:33:28.450859   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:33:28.453321   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.453738   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:28.453764   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.453922   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:33:28.454102   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.454274   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.454397   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:33:28.454533   33198 main.go:141] libmachine: Using SSH client type: native
	I0311 20:33:28.454722   33198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:33:28.454736   33198 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-834040 && echo "ha-834040" | sudo tee /etc/hostname
	I0311 20:33:28.585815   33198 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-834040
	
	I0311 20:33:28.585860   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:33:28.588686   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.589092   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:28.589116   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.589377   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:33:28.589566   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.589773   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.589910   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:33:28.590048   33198 main.go:141] libmachine: Using SSH client type: native
	I0311 20:33:28.590218   33198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:33:28.590239   33198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-834040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-834040/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-834040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 20:33:28.697743   33198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:33:28.697775   33198 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 20:33:28.697800   33198 buildroot.go:174] setting up certificates
	I0311 20:33:28.697809   33198 provision.go:84] configureAuth start
	I0311 20:33:28.697817   33198 main.go:141] libmachine: (ha-834040) Calling .GetMachineName
	I0311 20:33:28.698163   33198 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:33:28.700459   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.700906   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:28.700933   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.701070   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:33:28.702846   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.703209   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:28.703229   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.703398   33198 provision.go:143] copyHostCerts
	I0311 20:33:28.703427   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:33:28.703469   33198 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 20:33:28.703481   33198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:33:28.703557   33198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 20:33:28.703660   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:33:28.703683   33198 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 20:33:28.703690   33198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:33:28.703730   33198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 20:33:28.703838   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:33:28.703864   33198 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 20:33:28.703870   33198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:33:28.703906   33198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 20:33:28.703970   33198 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.ha-834040 san=[127.0.0.1 192.168.39.128 ha-834040 localhost minikube]
	I0311 20:33:28.852220   33198 provision.go:177] copyRemoteCerts
	I0311 20:33:28.852285   33198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 20:33:28.852312   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:33:28.854832   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.855243   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:28.855273   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:28.855478   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:33:28.855665   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:28.855834   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:33:28.855983   33198 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:33:28.936281   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0311 20:33:28.936359   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0311 20:33:28.966996   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0311 20:33:28.967052   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 20:33:28.996023   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0311 20:33:28.996085   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 20:33:29.024302   33198 provision.go:87] duration metric: took 326.482478ms to configureAuth
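The configureAuth step above regenerates the guest's server certificate with SANs [127.0.0.1 192.168.39.128 ha-834040 localhost minikube] and copies it to /etc/docker/server.pem. For illustration only (this is not minikube's code), a minimal Go sketch that checks whether a PEM certificate on disk actually carries one of those SANs; the path and hostname are taken from the log lines above.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Path and expected SAN taken from the provisioning log above.
        data, err := os.ReadFile("/etc/docker/server.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        want := "ha-834040"
        found := false
        for _, name := range cert.DNSNames {
            if name == want {
                found = true
            }
        }
        fmt.Printf("SAN %q present: %v (DNS: %v, IPs: %v)\n", want, found, cert.DNSNames, cert.IPAddresses)
    }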
	I0311 20:33:29.024326   33198 buildroot.go:189] setting minikube options for container-runtime
	I0311 20:33:29.024523   33198 config.go:182] Loaded profile config "ha-834040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:33:29.024615   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:33:29.027075   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:29.027425   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:33:29.027450   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:33:29.027591   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:33:29.027763   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:29.027910   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:33:29.028040   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:33:29.028212   33198 main.go:141] libmachine: Using SSH client type: native
	I0311 20:33:29.028368   33198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:33:29.028384   33198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 20:34:59.909468   33198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 20:34:59.909490   33198 machine.go:97] duration metric: took 1m31.576554147s to provisionDockerMachine
	I0311 20:34:59.909501   33198 start.go:293] postStartSetup for "ha-834040" (driver="kvm2")
	I0311 20:34:59.909511   33198 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 20:34:59.909524   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:34:59.909801   33198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 20:34:59.909860   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:34:59.912858   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:34:59.913279   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:34:59.913304   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:34:59.913443   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:34:59.913639   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:34:59.913827   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:34:59.913965   33198 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:34:59.996839   33198 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 20:35:00.002329   33198 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 20:35:00.002351   33198 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 20:35:00.002404   33198 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 20:35:00.002469   33198 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 20:35:00.002479   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /etc/ssl/certs/182352.pem
	I0311 20:35:00.002554   33198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 20:35:00.016406   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:35:00.045260   33198 start.go:296] duration metric: took 135.744546ms for postStartSetup
	I0311 20:35:00.045304   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:35:00.045611   33198 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0311 20:35:00.045640   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:35:00.047965   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.048370   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:35:00.048396   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.048541   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:35:00.048723   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:35:00.048893   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:35:00.049048   33198 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	W0311 20:35:00.131991   33198 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
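The warning above is expected on a node that never wrote a backup: the restore step probes /var/lib/minikube/backup, and a missing directory means "skip restore", not a failure. A minimal sketch of that distinction, assuming the same path; this is an illustration, not fix.go itself.

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "log"
        "os"
    )

    func main() {
        // Same path the restore step probes above; a missing directory is
        // treated as "nothing to restore" rather than an error.
        entries, err := os.ReadDir("/var/lib/minikube/backup")
        if errors.Is(err, fs.ErrNotExist) {
            fmt.Println("no backup folder, skipping restore")
            return
        }
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("backup folder has %d entries\n", len(entries))
    }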
	I0311 20:35:00.132023   33198 fix.go:56] duration metric: took 1m31.818746443s for fixHost
	I0311 20:35:00.132055   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:35:00.134403   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.134823   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:35:00.134853   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.135032   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:35:00.135231   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:35:00.135406   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:35:00.135549   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:35:00.135705   33198 main.go:141] libmachine: Using SSH client type: native
	I0311 20:35:00.135869   33198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0311 20:35:00.135880   33198 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 20:35:00.242337   33198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710189300.206863241
	
	I0311 20:35:00.242356   33198 fix.go:216] guest clock: 1710189300.206863241
	I0311 20:35:00.242363   33198 fix.go:229] Guest: 2024-03-11 20:35:00.206863241 +0000 UTC Remote: 2024-03-11 20:35:00.132031274 +0000 UTC m=+91.958740141 (delta=74.831967ms)
	I0311 20:35:00.242391   33198 fix.go:200] guest clock delta is within tolerance: 74.831967ms
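The fix step above reads the guest clock over SSH and compares it with the host clock, accepting the ~75ms delta. A minimal sketch of the same comparison, using the two timestamps from the log; the tolerance value is an assumption for illustration, since the log only reports that the delta was "within tolerance".

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps taken from the fix.go lines above (guest clock vs. host clock).
        guest := time.Unix(1710189300, 206863241).UTC()
        host, _ := time.Parse(time.RFC3339Nano, "2024-03-11T20:35:00.132031274Z")

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        // Assumed tolerance for illustration only; the real threshold is not
        // visible in this log.
        tolerance := 2 * time.Second
        fmt.Printf("delta=%v within %v: %v\n", delta, tolerance, delta <= tolerance)
    }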
	I0311 20:35:00.242397   33198 start.go:83] releasing machines lock for "ha-834040", held for 1m31.929132911s
	I0311 20:35:00.242415   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:35:00.242677   33198 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:35:00.245079   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.245482   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:35:00.245527   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.245641   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:35:00.246235   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:35:00.246410   33198 main.go:141] libmachine: (ha-834040) Calling .DriverName
	I0311 20:35:00.246497   33198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 20:35:00.246542   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:35:00.246639   33198 ssh_runner.go:195] Run: cat /version.json
	I0311 20:35:00.246665   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHHostname
	I0311 20:35:00.249177   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.249467   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.249561   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:35:00.249603   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.249706   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:35:00.249849   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:35:00.249858   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:35:00.249877   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:00.250031   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHPort
	I0311 20:35:00.250032   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:35:00.250220   33198 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:35:00.250246   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHKeyPath
	I0311 20:35:00.250393   33198 main.go:141] libmachine: (ha-834040) Calling .GetSSHUsername
	I0311 20:35:00.250527   33198 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/ha-834040/id_rsa Username:docker}
	I0311 20:35:00.326624   33198 ssh_runner.go:195] Run: systemctl --version
	I0311 20:35:00.352392   33198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 20:35:00.522734   33198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 20:35:00.530063   33198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 20:35:00.530138   33198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 20:35:00.541331   33198 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
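The CNI step above looks for bridge/podman configs under /etc/cni/net.d and finds nothing to disable. A sketch of the same kind of lookup with a glob, assuming the same directory; the pattern here only covers the bridge case for brevity.

    package main

    import (
        "fmt"
        "log"
        "path/filepath"
    )

    func main() {
        // Same directory the disable step above scans; only bridge configs
        // are matched in this sketch.
        matches, err := filepath.Glob("/etc/cni/net.d/*bridge*")
        if err != nil {
            log.Fatal(err)
        }
        if len(matches) == 0 {
            fmt.Println("no active bridge cni configs found - nothing to disable")
            return
        }
        fmt.Println("bridge cni configs:", matches)
    }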
	I0311 20:35:00.541349   33198 start.go:494] detecting cgroup driver to use...
	I0311 20:35:00.541417   33198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 20:35:00.559256   33198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 20:35:00.574277   33198 docker.go:217] disabling cri-docker service (if available) ...
	I0311 20:35:00.574328   33198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 20:35:00.590177   33198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 20:35:00.605002   33198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 20:35:00.767373   33198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 20:35:00.927704   33198 docker.go:233] disabling docker service ...
	I0311 20:35:00.927758   33198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 20:35:00.947407   33198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 20:35:00.962590   33198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 20:35:01.115537   33198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 20:35:01.269146   33198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 20:35:01.284696   33198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 20:35:01.305768   33198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 20:35:01.305838   33198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:35:01.319388   33198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 20:35:01.319441   33198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:35:01.332232   33198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:35:01.344043   33198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:35:01.356143   33198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 20:35:01.368840   33198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 20:35:01.379804   33198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 20:35:01.390913   33198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:35:01.549351   33198 ssh_runner.go:195] Run: sudo systemctl restart crio
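The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup_manager, conmon_cgroup) before restarting CRI-O. As an illustration of the cgroup_manager edit only, a Go sketch that applies the same line-oriented replacement to an in-memory snippet rather than the real file.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Mirrors the sed expression in the log above, applied to a sample
        // config snippet instead of /etc/crio/crio.conf.d/02-crio.conf.
        conf := "conmon = \"/usr/libexec/crio/conmon\"\ncgroup_manager = \"systemd\"\n"
        re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        fmt.Print(re.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`))
    }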
	I0311 20:35:01.923429   33198 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 20:35:01.923488   33198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 20:35:01.928835   33198 start.go:562] Will wait 60s for crictl version
	I0311 20:35:01.928891   33198 ssh_runner.go:195] Run: which crictl
	I0311 20:35:01.933350   33198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 20:35:01.987937   33198 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
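After restarting CRI-O, the log above shows a bounded wait: up to 60s for /var/run/crio/crio.sock to appear, then up to 60s for crictl to report a version. A minimal sketch of that style of wait loop, for illustration only; the polling interval is an assumption.

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // assumed polling interval
        }
        return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            log.Fatal(err)
        }
        fmt.Println("socket is present")
    }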
	I0311 20:35:01.988026   33198 ssh_runner.go:195] Run: crio --version
	I0311 20:35:02.019165   33198 ssh_runner.go:195] Run: crio --version
	I0311 20:35:02.052945   33198 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 20:35:02.054221   33198 main.go:141] libmachine: (ha-834040) Calling .GetIP
	I0311 20:35:02.056782   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:02.057164   33198 main.go:141] libmachine: (ha-834040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6f:e8", ip: ""} in network mk-ha-834040: {Iface:virbr1 ExpiryTime:2024-03-11 21:23:00 +0000 UTC Type:0 Mac:52:54:00:33:6f:e8 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-834040 Clientid:01:52:54:00:33:6f:e8}
	I0311 20:35:02.057186   33198 main.go:141] libmachine: (ha-834040) DBG | domain ha-834040 has defined IP address 192.168.39.128 and MAC address 52:54:00:33:6f:e8 in network mk-ha-834040
	I0311 20:35:02.057386   33198 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 20:35:02.062545   33198 kubeadm.go:877] updating cluster {Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 20:35:02.062686   33198 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:35:02.062732   33198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:35:02.115425   33198 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 20:35:02.115446   33198 crio.go:415] Images already preloaded, skipping extraction
	I0311 20:35:02.115486   33198 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:35:02.157541   33198 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 20:35:02.157563   33198 cache_images.go:84] Images are preloaded, skipping loading
	I0311 20:35:02.157573   33198 kubeadm.go:928] updating node { 192.168.39.128 8443 v1.28.4 crio true true} ...
	I0311 20:35:02.157696   33198 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-834040 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 20:35:02.157778   33198 ssh_runner.go:195] Run: crio config
	I0311 20:35:02.207212   33198 cni.go:84] Creating CNI manager for ""
	I0311 20:35:02.207243   33198 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0311 20:35:02.207256   33198 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 20:35:02.207275   33198 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-834040 NodeName:ha-834040 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 20:35:02.207435   33198 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-834040"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
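The generated kubeadm config above pins podSubnet to 10.244.0.0/16 and serviceSubnet to 10.96.0.0/12. For illustration only, a small Go check that both subnets parse and do not overlap; the overlap test via mutual containment of the network addresses is an assumption about what a sanity check might look like, not minikube's code.

    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        // Subnets taken from the kubeadm config above.
        _, podNet, err := net.ParseCIDR("10.244.0.0/16")
        if err != nil {
            log.Fatal(err)
        }
        _, svcNet, err := net.ParseCIDR("10.96.0.0/12")
        if err != nil {
            log.Fatal(err)
        }
        // Two aligned CIDR blocks overlap iff either network address lies
        // inside the other block.
        overlap := podNet.Contains(svcNet.IP) || svcNet.Contains(podNet.IP)
        fmt.Printf("pod=%v service=%v overlap=%v\n", podNet, svcNet, overlap)
    }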
	
	I0311 20:35:02.207461   33198 kube-vip.go:101] generating kube-vip config ...
	I0311 20:35:02.207506   33198 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
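The kube-vip static pod manifest above advertises the control-plane VIP 192.168.39.254 on eth0. As a sketch only, a quick check that the VIP is a valid IPv4 address inside the node network 192.168.39.0/24 seen in the DHCP lease lines elsewhere in this log; the subnet is taken from those lines, not from the manifest.

    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        vip := net.ParseIP("192.168.39.254") // address from the kube-vip manifest above
        if vip == nil || vip.To4() == nil {
            log.Fatal("VIP is not a valid IPv4 address")
        }
        _, nodeNet, err := net.ParseCIDR("192.168.39.0/24") // node network from the DHCP lease lines
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("VIP %v inside %v: %v\n", vip, nodeNet, nodeNet.Contains(vip))
    }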
	I0311 20:35:02.207549   33198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 20:35:02.218800   33198 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 20:35:02.218867   33198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0311 20:35:02.229881   33198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0311 20:35:02.248776   33198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 20:35:02.267701   33198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0311 20:35:02.285978   33198 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0311 20:35:02.304380   33198 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0311 20:35:02.308680   33198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:35:02.455235   33198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:35:02.471506   33198 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040 for IP: 192.168.39.128
	I0311 20:35:02.471529   33198 certs.go:194] generating shared ca certs ...
	I0311 20:35:02.471548   33198 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:35:02.471727   33198 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 20:35:02.471793   33198 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 20:35:02.471807   33198 certs.go:256] generating profile certs ...
	I0311 20:35:02.471896   33198 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/client.key
	I0311 20:35:02.471930   33198 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.8b7c4a26
	I0311 20:35:02.471947   33198 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.8b7c4a26 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128 192.168.39.101 192.168.39.40 192.168.39.254]
	I0311 20:35:02.632897   33198 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.8b7c4a26 ...
	I0311 20:35:02.632923   33198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.8b7c4a26: {Name:mk8ed2f3c0d8195405e2faef9275c0bb79ff2ac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:35:02.633080   33198 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.8b7c4a26 ...
	I0311 20:35:02.633091   33198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.8b7c4a26: {Name:mkfe4e256c37c321648816748aaee4cf776ec925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:35:02.633160   33198 certs.go:381] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt.8b7c4a26 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt
	I0311 20:35:02.633304   33198 certs.go:385] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key.8b7c4a26 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key
	I0311 20:35:02.633427   33198 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key
	I0311 20:35:02.633442   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0311 20:35:02.633453   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0311 20:35:02.633464   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0311 20:35:02.633474   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0311 20:35:02.633483   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0311 20:35:02.633492   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0311 20:35:02.633502   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0311 20:35:02.633512   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0311 20:35:02.633557   33198 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 20:35:02.633583   33198 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 20:35:02.633592   33198 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 20:35:02.633615   33198 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 20:35:02.633664   33198 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 20:35:02.633689   33198 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 20:35:02.633724   33198 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:35:02.633748   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem -> /usr/share/ca-certificates/18235.pem
	I0311 20:35:02.633761   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /usr/share/ca-certificates/182352.pem
	I0311 20:35:02.633774   33198 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:35:02.634276   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 20:35:02.662882   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 20:35:02.689474   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 20:35:02.715652   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 20:35:02.743547   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0311 20:35:02.769956   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 20:35:02.797999   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 20:35:02.825764   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/ha-834040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 20:35:02.852129   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 20:35:02.877568   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 20:35:02.903812   33198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 20:35:02.929369   33198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 20:35:02.948373   33198 ssh_runner.go:195] Run: openssl version
	I0311 20:35:02.954891   33198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 20:35:02.967173   33198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:35:02.972176   33198 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:35:02.972229   33198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:35:02.978615   33198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 20:35:02.994538   33198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 20:35:03.006744   33198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 20:35:03.011794   33198 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 20:35:03.011847   33198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 20:35:03.018142   33198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 20:35:03.029227   33198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 20:35:03.041607   33198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 20:35:03.047089   33198 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 20:35:03.047138   33198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 20:35:03.053523   33198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 20:35:03.065109   33198 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 20:35:03.070392   33198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 20:35:03.076658   33198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 20:35:03.082922   33198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 20:35:03.089032   33198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 20:35:03.095166   33198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 20:35:03.101368   33198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
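The run of `openssl x509 -noout -checkend 86400` calls above verifies that each control-plane certificate remains valid for at least another 24 hours. A minimal Go equivalent of that check, shown for illustration; the path is one of the files probed above.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // One of the certificates checked in the log above.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of `openssl x509 -checkend 86400`: still valid 24h from now?
        ok := time.Now().Add(24 * time.Hour).Before(cert.NotAfter)
        fmt.Printf("valid for at least 24h: %v (NotAfter=%v)\n", ok, cert.NotAfter)
    }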
	I0311 20:35:03.108167   33198 kubeadm.go:391] StartCluster: {Name:ha-834040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-834040 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.101 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.40 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:35:03.108268   33198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 20:35:03.108308   33198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 20:35:03.156039   33198 cri.go:89] found id: "6bdfe67eb7a848f6a0d969a29c20ba9575264d3254289f3e69d76d2e256f0a23"
	I0311 20:35:03.156063   33198 cri.go:89] found id: "c5745481a2bd303d21ee4b5d13b5667eba96af6aba1c646e8cac99a1390a8572"
	I0311 20:35:03.156068   33198 cri.go:89] found id: "b1a7df27a0f7c49fa96b4dfc438c4814e0c224f8f2f6bba553866403916ca5c1"
	I0311 20:35:03.156074   33198 cri.go:89] found id: "b96396c0e35ce209cca3d72aa43430faa3908fc9287ff74cc60440fdf88f040f"
	I0311 20:35:03.156078   33198 cri.go:89] found id: "afc1d1d2e164dd343671afbbbe3ffc3de1a7f9e87e3fb6c2094eed1725c62105"
	I0311 20:35:03.156084   33198 cri.go:89] found id: "48ff55cc7dd7ce86b2ec6d65b88532b25bd348edd26139398dbf126195687f15"
	I0311 20:35:03.156088   33198 cri.go:89] found id: "7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450"
	I0311 20:35:03.156092   33198 cri.go:89] found id: "6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc"
	I0311 20:35:03.156096   33198 cri.go:89] found id: "ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25"
	I0311 20:35:03.156103   33198 cri.go:89] found id: "4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1"
	I0311 20:35:03.156107   33198 cri.go:89] found id: "abfa6c7eaf9de4ab3088d26a5835e9b00f125cd279c3b56757edcb48e368cbf8"
	I0311 20:35:03.156111   33198 cri.go:89] found id: "4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861"
	I0311 20:35:03.156114   33198 cri.go:89] found id: "d2c6fc6f4ca02e29aec794ea48b682294a80ffbea548013775fff8dfd449a944"
	I0311 20:35:03.156118   33198 cri.go:89] found id: ""
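The StartCluster step above enumerates kube-system containers with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` and collects their IDs. A sketch of driving the same command from Go, assuming crictl is on PATH and the caller has the needed privileges.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same invocation as in the log above; requires crictl and root privileges.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            log.Fatal(err)
        }
        ids := strings.Fields(string(out))
        fmt.Printf("found %d kube-system containers\n", len(ids))
        for _, id := range ids {
            fmt.Println(id)
        }
    }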
	I0311 20:35:03.156170   33198 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.816247781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7231d385-7a7f-4726-bd04-31401e33f0b3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.817256548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6c7f58f0ecba2abb4331fff9dd84f1caaada79b61f3e7d55d8f0d7306667734,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710189437599719439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4fa8160f6f5215b914701525d711241bb4d574dd1f1c698301b206fc545ab5,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710189382610029136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710189350602547776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d12665eb117c2cc75d85256cf4dd018d8ed2992d5f7c141134a85b41b2a4294,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710189349597577731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a17645171f66b7a1858a9482aeee87d6041bfd933d305b1548e3ebfa58800,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710189347596792761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3832accc496d3e6679bd39117f2f8e7c441c6a002c9e64c0ec10c3e20a2e2a2a,PodSandboxId:c2780ed8082241d2d00f6529cc7d2c01776909d9f84c2c0e4731e4006bc0669b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710189338938216376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9876035a67109aab2d7ccb01e043938c07a68707f0b5aac080bdc3f86a9a263,PodSandboxId:9db00ddf870f0dc290aff114bb00eb43547e46a8d8b29ae944a1117328fce69e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710189306578156100,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:b60c1c2efa76c17a9d1751e8bb3b16ca171899c4bf68a80acf6925f84e1a7c55,PodSandboxId:a0d58ca9155034374fd9f12edbc5e58f99162c267563c3bb25ea5a7c7e7a2772,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710189306053992091,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8d445
c7477e86f69595642d02430b9dbe61c4ecbff89353b7edca7c7bd72da,PodSandboxId:feacd92c56223e2e8bf7543d1d93913b6ca8e364e24f66932eec768f2c500882,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189306008448313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:295775061cd270ab219ce780ebeb623bf6f1dedfcd5e5693598e3cb2b65c506d,PodSandboxId:5f09ca01a653a1f54a6736c0ec543c45b9c4b0b69395e09fbde14c7976d5970b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710189305860476552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaefaf7c41e62b6bf2975f73ab22408cd0498630eeb0042872545e429387e0db,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710189305790540189,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107118ba00c2d09428d6fb98ab4898f7fdeab599261beefaf53f6d20b8a12802,PodSandboxId:48d5e3492f37bdc2894837aa00c8d95665b2b817628e3ebc846b9e22d9a772bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710189305777401386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b7ee8bc2fdc38b38cf39f7d4cb9080e58593b4e35407bf28ba440d3a7aae44,PodSandboxId:31077b778010bae070fdaba2a7e62491855b23640273a208df747f420acc6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189305663521866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a69a51bad87e670335840f5e4e47f671ebfb4ee83d1a1be58ee2fe4d9111f1,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710189305534759912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map
[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f072720516b73eb54d2f1b36bfaf802e1d1f8c14b6fab73ed78f4e12e4dfc3d,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710189305490975196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kube
rnetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96396c0e35ce209cca3d72aa43430faa3908fc9287ff74cc60440fdf88f040f,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710189114601156633,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kuber
netes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710188810909713860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626343540719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626308848252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710188621618774474,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Crea
tedAt:1710188600842160862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710188600790791703,Labels:map[strin
g]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7231d385-7a7f-4726-bd04-31401e33f0b3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.871015443Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1423a09-b437-488b-90a8-d1ca62517720 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.871212852Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1423a09-b437-488b-90a8-d1ca62517720 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.872591330Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=256ca730-a073-4bbf-95f3-e9964b12c8f9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.873277102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189629873247081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=256ca730-a073-4bbf-95f3-e9964b12c8f9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.873960429Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2557e49f-d19b-4fb8-9a4e-7182e9ac5fcd name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.874034680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2557e49f-d19b-4fb8-9a4e-7182e9ac5fcd name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.874731952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6c7f58f0ecba2abb4331fff9dd84f1caaada79b61f3e7d55d8f0d7306667734,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710189437599719439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4fa8160f6f5215b914701525d711241bb4d574dd1f1c698301b206fc545ab5,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710189382610029136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710189350602547776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d12665eb117c2cc75d85256cf4dd018d8ed2992d5f7c141134a85b41b2a4294,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710189349597577731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a17645171f66b7a1858a9482aeee87d6041bfd933d305b1548e3ebfa58800,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710189347596792761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3832accc496d3e6679bd39117f2f8e7c441c6a002c9e64c0ec10c3e20a2e2a2a,PodSandboxId:c2780ed8082241d2d00f6529cc7d2c01776909d9f84c2c0e4731e4006bc0669b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710189338938216376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9876035a67109aab2d7ccb01e043938c07a68707f0b5aac080bdc3f86a9a263,PodSandboxId:9db00ddf870f0dc290aff114bb00eb43547e46a8d8b29ae944a1117328fce69e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710189306578156100,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:b60c1c2efa76c17a9d1751e8bb3b16ca171899c4bf68a80acf6925f84e1a7c55,PodSandboxId:a0d58ca9155034374fd9f12edbc5e58f99162c267563c3bb25ea5a7c7e7a2772,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710189306053992091,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8d445
c7477e86f69595642d02430b9dbe61c4ecbff89353b7edca7c7bd72da,PodSandboxId:feacd92c56223e2e8bf7543d1d93913b6ca8e364e24f66932eec768f2c500882,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189306008448313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:295775061cd270ab219ce780ebeb623bf6f1dedfcd5e5693598e3cb2b65c506d,PodSandboxId:5f09ca01a653a1f54a6736c0ec543c45b9c4b0b69395e09fbde14c7976d5970b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710189305860476552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaefaf7c41e62b6bf2975f73ab22408cd0498630eeb0042872545e429387e0db,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710189305790540189,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107118ba00c2d09428d6fb98ab4898f7fdeab599261beefaf53f6d20b8a12802,PodSandboxId:48d5e3492f37bdc2894837aa00c8d95665b2b817628e3ebc846b9e22d9a772bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710189305777401386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b7ee8bc2fdc38b38cf39f7d4cb9080e58593b4e35407bf28ba440d3a7aae44,PodSandboxId:31077b778010bae070fdaba2a7e62491855b23640273a208df747f420acc6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189305663521866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a69a51bad87e670335840f5e4e47f671ebfb4ee83d1a1be58ee2fe4d9111f1,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710189305534759912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map
[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f072720516b73eb54d2f1b36bfaf802e1d1f8c14b6fab73ed78f4e12e4dfc3d,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710189305490975196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kube
rnetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96396c0e35ce209cca3d72aa43430faa3908fc9287ff74cc60440fdf88f040f,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710189114601156633,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kuber
netes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710188810909713860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626343540719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626308848252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710188621618774474,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Crea
tedAt:1710188600842160862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710188600790791703,Labels:map[strin
g]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2557e49f-d19b-4fb8-9a4e-7182e9ac5fcd name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.896777344Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=1695e514-8b08-4362-aabd-5a3b0e9582cb name=/runtime.v1.ImageService/ListImages
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.897457499Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.4],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499 registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb],Size_:127226832,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,RepoTags:[registry.k8s.io/kube-controller-manager:v1.28.4],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232],Size_:123261750,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{
Id:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,RepoTags:[registry.k8s.io/kube-scheduler:v1.28.4],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32],Size_:61551410,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,RepoTags:[registry.k8s.io/kube-proxy:v1.28.4],RepoDigests:[registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532],Size_:74749335,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 re
gistry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,RepoTags:[registry.k8s.io/etcd:3.5.9-0],RepoDigests:[registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15 registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3],Size_:295456551,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709
a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,RepoTags:[docker.io/kindest/kindnetd:v20230809-80a64d96],RepoDigests:[docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052 docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4],Size_:65258016,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.7.1],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a ghcr.io/kube-vip/kube-vip@sh
a256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016],Size_:49275355,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,RepoTags:[docker.io/kindest/kindnetd:v20240202-8f1494ea],RepoDigests:[docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988 docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac],Size_:65291810,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=1695e514-8b08-4362-aabd-5a3b0e9582cb name=/runtime.v1.Ima
geService/ListImages
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.957469584Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1384534d-6973-4b9f-bd5d-93c1fb3aa75a name=/runtime.v1.RuntimeService/Version
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.957598697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1384534d-6973-4b9f-bd5d-93c1fb3aa75a name=/runtime.v1.RuntimeService/Version
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.959311915Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64600d41-153e-4944-ab23-623f2f8d2df2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.959999921Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189629959968420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64600d41-153e-4944-ab23-623f2f8d2df2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.960934023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9e8aa26-8ced-417e-b30b-d78845e5c212 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.961013639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9e8aa26-8ced-417e-b30b-d78845e5c212 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:40:29 ha-834040 crio[3934]: time="2024-03-11 20:40:29.961715563Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6c7f58f0ecba2abb4331fff9dd84f1caaada79b61f3e7d55d8f0d7306667734,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710189437599719439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4fa8160f6f5215b914701525d711241bb4d574dd1f1c698301b206fc545ab5,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710189382610029136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710189350602547776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d12665eb117c2cc75d85256cf4dd018d8ed2992d5f7c141134a85b41b2a4294,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710189349597577731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a17645171f66b7a1858a9482aeee87d6041bfd933d305b1548e3ebfa58800,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710189347596792761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3832accc496d3e6679bd39117f2f8e7c441c6a002c9e64c0ec10c3e20a2e2a2a,PodSandboxId:c2780ed8082241d2d00f6529cc7d2c01776909d9f84c2c0e4731e4006bc0669b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710189338938216376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9876035a67109aab2d7ccb01e043938c07a68707f0b5aac080bdc3f86a9a263,PodSandboxId:9db00ddf870f0dc290aff114bb00eb43547e46a8d8b29ae944a1117328fce69e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710189306578156100,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:b60c1c2efa76c17a9d1751e8bb3b16ca171899c4bf68a80acf6925f84e1a7c55,PodSandboxId:a0d58ca9155034374fd9f12edbc5e58f99162c267563c3bb25ea5a7c7e7a2772,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710189306053992091,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8d445
c7477e86f69595642d02430b9dbe61c4ecbff89353b7edca7c7bd72da,PodSandboxId:feacd92c56223e2e8bf7543d1d93913b6ca8e364e24f66932eec768f2c500882,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189306008448313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:295775061cd270ab219ce780ebeb623bf6f1dedfcd5e5693598e3cb2b65c506d,PodSandboxId:5f09ca01a653a1f54a6736c0ec543c45b9c4b0b69395e09fbde14c7976d5970b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710189305860476552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaefaf7c41e62b6bf2975f73ab22408cd0498630eeb0042872545e429387e0db,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710189305790540189,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107118ba00c2d09428d6fb98ab4898f7fdeab599261beefaf53f6d20b8a12802,PodSandboxId:48d5e3492f37bdc2894837aa00c8d95665b2b817628e3ebc846b9e22d9a772bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710189305777401386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b7ee8bc2fdc38b38cf39f7d4cb9080e58593b4e35407bf28ba440d3a7aae44,PodSandboxId:31077b778010bae070fdaba2a7e62491855b23640273a208df747f420acc6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189305663521866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a69a51bad87e670335840f5e4e47f671ebfb4ee83d1a1be58ee2fe4d9111f1,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710189305534759912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map
[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f072720516b73eb54d2f1b36bfaf802e1d1f8c14b6fab73ed78f4e12e4dfc3d,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710189305490975196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kube
rnetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96396c0e35ce209cca3d72aa43430faa3908fc9287ff74cc60440fdf88f040f,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710189114601156633,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kuber
netes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710188810909713860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626343540719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626308848252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710188621618774474,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Crea
tedAt:1710188600842160862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710188600790791703,Labels:map[strin
g]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9e8aa26-8ced-417e-b30b-d78845e5c212 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:40:30 ha-834040 crio[3934]: time="2024-03-11 20:40:30.012922612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d37921ce-455a-4d9f-a036-202219acb473 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:40:30 ha-834040 crio[3934]: time="2024-03-11 20:40:30.013051041Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d37921ce-455a-4d9f-a036-202219acb473 name=/runtime.v1.RuntimeService/Version
	Mar 11 20:40:30 ha-834040 crio[3934]: time="2024-03-11 20:40:30.015415049Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71219b15-c965-4925-a6ea-9cd8c50b7ebb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:40:30 ha-834040 crio[3934]: time="2024-03-11 20:40:30.016653808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710189630016620580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71219b15-c965-4925-a6ea-9cd8c50b7ebb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 20:40:30 ha-834040 crio[3934]: time="2024-03-11 20:40:30.017721797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85bf17cc-0c6c-40c6-90c5-fa0c791ebd01 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:40:30 ha-834040 crio[3934]: time="2024-03-11 20:40:30.017833923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85bf17cc-0c6c-40c6-90c5-fa0c791ebd01 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 20:40:30 ha-834040 crio[3934]: time="2024-03-11 20:40:30.022389945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6c7f58f0ecba2abb4331fff9dd84f1caaada79b61f3e7d55d8f0d7306667734,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710189437599719439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4fa8160f6f5215b914701525d711241bb4d574dd1f1c698301b206fc545ab5,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710189382610029136,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7,PodSandboxId:6ef704c8e70a9b57900a2f7b4ee91e02a93d15fcd82f1d1c7d241d195febc4b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710189350602547776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbc64228-86a0-4e0c-9eef-f4644439ca13,},Annotations:map[string]string{io.kubernetes.container.hash: b7ec0905,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d12665eb117c2cc75d85256cf4dd018d8ed2992d5f7c141134a85b41b2a4294,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710189349597577731,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kubernetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a17645171f66b7a1858a9482aeee87d6041bfd933d305b1548e3ebfa58800,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710189347596792761,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3832accc496d3e6679bd39117f2f8e7c441c6a002c9e64c0ec10c3e20a2e2a2a,PodSandboxId:c2780ed8082241d2d00f6529cc7d2c01776909d9f84c2c0e4731e4006bc0669b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710189338938216376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9876035a67109aab2d7ccb01e043938c07a68707f0b5aac080bdc3f86a9a263,PodSandboxId:9db00ddf870f0dc290aff114bb00eb43547e46a8d8b29ae944a1117328fce69e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710189306578156100,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:b60c1c2efa76c17a9d1751e8bb3b16ca171899c4bf68a80acf6925f84e1a7c55,PodSandboxId:a0d58ca9155034374fd9f12edbc5e58f99162c267563c3bb25ea5a7c7e7a2772,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710189306053992091,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8d445
c7477e86f69595642d02430b9dbe61c4ecbff89353b7edca7c7bd72da,PodSandboxId:feacd92c56223e2e8bf7543d1d93913b6ca8e364e24f66932eec768f2c500882,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189306008448313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:295775061cd270ab219ce780ebeb623bf6f1dedfcd5e5693598e3cb2b65c506d,PodSandboxId:5f09ca01a653a1f54a6736c0ec543c45b9c4b0b69395e09fbde14c7976d5970b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710189305860476552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaefaf7c41e62b6bf2975f73ab22408cd0498630eeb0042872545e429387e0db,PodSandboxId:bfa23d82d4c2e910fbd316826baee92fc3f2ab5cbbbe4597db5a8ec865977d02,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710189305790540189,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bw656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb13135-e5b5-46df-922e-5ebfb444c219,},Annotations:map[string]string{io.kubernetes.container.hash: 17139a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107118ba00c2d09428d6fb98ab4898f7fdeab599261beefaf53f6d20b8a12802,PodSandboxId:48d5e3492f37bdc2894837aa00c8d95665b2b817628e3ebc846b9e22d9a772bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710189305777401386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b7ee8bc2fdc38b38cf39f7d4cb9080e58593b4e35407bf28ba440d3a7aae44,PodSandboxId:31077b778010bae070fdaba2a7e62491855b23640273a208df747f420acc6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710189305663521866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a69a51bad87e670335840f5e4e47f671ebfb4ee83d1a1be58ee2fe4d9111f1,PodSandboxId:4fc559c46ae672d8df0e1a5c296f61ad956dfd45bcb84408807b0b75792b9faa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710189305534759912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ff0d61e78d4c7e81a3739c4cfca961,},Annotations:map
[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f072720516b73eb54d2f1b36bfaf802e1d1f8c14b6fab73ed78f4e12e4dfc3d,PodSandboxId:85b6fb2e7a9feacda278b3e1520b2aa53d9ee1161274a3803c594f682fae0771,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710189305490975196,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 335a4d4972ebbbc7fad3e18de1f91d62,},Annotations:map[string]string{io.kube
rnetes.container.hash: a2ec0d2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96396c0e35ce209cca3d72aa43430faa3908fc9287ff74cc60440fdf88f040f,PodSandboxId:dcb18e5f12de13716a5e3e452a9f6a7da9d1134f9c0463a4812305d04e0712e0,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710189114601156633,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1850c9be0d7c3186930048a411f0848e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kuber
netes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e9f2d7df5c5a3fb4e0936d25db5ef7b888b253a84729b2ea746bd52240868,PodSandboxId:417164b9b0cb4cf7c5f35870da42ac37bfa937bc7a249049062b56539889d92f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710188810909713860,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-d62cw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea39821f-426d-43bf-a955-77e3a308239e,},Annotations:map[string]string{io.kubernetes.container.hash: aa95a7ac,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450,PodSandboxId:4860ab9172968acccd2feec407548c9a616d7d05c17bd8eeb9ea460a47914a75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626343540719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kq47h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2a70553-206f-4d11-b32f-01ddd30db8ec,},Annotations:map[string]string{io.kubernetes.container.hash: d2e4795b,io.kubernetes.container.ports: [{\"name\"
:\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc,PodSandboxId:94384bd2f8c9834ea60b26f58b54a3f8ded040d4492a1b72a842dfa78a2e1a4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710188626308848252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-d6f2x,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: ddc7bef4-f6c5-442f-8149-e52a1822986d,},Annotations:map[string]string{io.kubernetes.container.hash: 56234176,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25,PodSandboxId:a9e018e6df6e7498b9eb7fe9399edc330adf905fe0031d6719252a734b138b98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710188621618774474,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8svv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a7973ca-9a35-4190-8845-cc685619b093,},Annotations:map[string]string{io.kubernetes.container.hash: 211c033d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1,PodSandboxId:3e8bbccfbf3880b57aac53f6890d21e792e8c5c56e597fed1e47eb0293759380,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Crea
tedAt:1710188600842160862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8574caa0e5c64be17c44650f230da671,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7a430c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861,PodSandboxId:85d4eab358f29e7748807f209209f76c0009f9f3824ae2e5dde01603232fae9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710188600790791703,Labels:map[strin
g]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-834040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acfbe685e85c9978570c826b71def2d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85bf17cc-0c6c-40c6-90c5-fa0c791ebd01 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d6c7f58f0ecba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       5                   6ef704c8e70a9       storage-provisioner
	5a4fa8160f6f5       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   bfa23d82d4c2e       kindnet-bw656
	a20030032ebd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       4                   6ef704c8e70a9       storage-provisioner
	4d12665eb117c       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      4 minutes ago       Running             kube-apiserver            3                   85b6fb2e7a9fe       kube-apiserver-ha-834040
	a61a17645171f       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      4 minutes ago       Running             kube-controller-manager   2                   4fc559c46ae67       kube-controller-manager-ha-834040
	3832accc496d3       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   c2780ed808224       busybox-5b5d89c9d6-d62cw
	f9876035a6710       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      5 minutes ago       Running             kube-proxy                1                   9db00ddf870f0       kube-proxy-h8svv
	b60c1c2efa76c       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  3                   a0d58ca915503       kube-vip-ha-834040
	da8d445c7477e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Running             coredns                   1                   feacd92c56223       coredns-5dd5756b68-d6f2x
	295775061cd27       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      5 minutes ago       Running             kube-scheduler            1                   5f09ca01a653a       kube-scheduler-ha-834040
	eaefaf7c41e62       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   bfa23d82d4c2e       kindnet-bw656
	107118ba00c2d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      5 minutes ago       Running             etcd                      1                   48d5e3492f37b       etcd-ha-834040
	a6b7ee8bc2fdc       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Running             coredns                   1                   31077b778010b       coredns-5dd5756b68-kq47h
	f1a69a51bad87       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      5 minutes ago       Exited              kube-controller-manager   1                   4fc559c46ae67       kube-controller-manager-ha-834040
	9f072720516b7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      5 minutes ago       Exited              kube-apiserver            2                   85b6fb2e7a9fe       kube-apiserver-ha-834040
	b96396c0e35ce       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      8 minutes ago       Exited              kube-vip                  2                   dcb18e5f12de1       kube-vip-ha-834040
	251e9f2d7df5c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   417164b9b0cb4       busybox-5b5d89c9d6-d62cw
	7be345e0f22ca       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago      Exited              coredns                   0                   4860ab9172968       coredns-5dd5756b68-kq47h
	6926d89f93fa7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago      Exited              coredns                   0                   94384bd2f8c98       coredns-5dd5756b68-d6f2x
	ab5ff27a1d4cb       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      16 minutes ago      Exited              kube-proxy                0                   a9e018e6df6e7       kube-proxy-h8svv
	4395af23a1752       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      17 minutes ago      Exited              etcd                      0                   3e8bbccfbf388       etcd-ha-834040
	4b273e6fedf1a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      17 minutes ago      Exited              kube-scheduler            0                   85d4eab358f29       kube-scheduler-ha-834040
	
	
	==> coredns [6926d89f93fa70db4c771911c371482cadbf6469466a9bb57b4ecea09e9db6bc] <==
	[INFO] 10.244.0.4:34351 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182428s
	[INFO] 10.244.1.2:54939 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000278877s
	[INFO] 10.244.1.2:37033 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000194177s
	[INFO] 10.244.1.2:37510 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000190608s
	[INFO] 10.244.2.2:41536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108104s
	[INFO] 10.244.2.2:41561 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122082s
	[INFO] 10.244.0.4:42660 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221566s
	[INFO] 10.244.0.4:53159 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000188136s
	[INFO] 10.244.0.4:41046 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100215s
	[INFO] 10.244.0.4:50387 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176539s
	[INFO] 10.244.1.2:54773 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120996s
	[INFO] 10.244.1.2:51952 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119653s
	[INFO] 10.244.2.2:59116 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134078s
	[INFO] 10.244.2.2:47917 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128001s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1850&timeout=5m55s&timeoutSeconds=355&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1808&timeout=8m19s&timeoutSeconds=499&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1811&timeout=8m53s&timeoutSeconds=533&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7be345e0f22ca6c2302b326f6664a03f79ac52ab08fa5e3c81729249aa00f450] <==
	[INFO] 10.244.0.4:58455 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118841s
	[INFO] 10.244.0.4:49345 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003481053s
	[INFO] 10.244.0.4:56716 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000187984s
	[INFO] 10.244.0.4:35412 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160258s
	[INFO] 10.244.1.2:56957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150599s
	[INFO] 10.244.1.2:53790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001450755s
	[INFO] 10.244.1.2:53927 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207107s
	[INFO] 10.244.2.2:55011 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001744357s
	[INFO] 10.244.2.2:59931 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000316475s
	[INFO] 10.244.2.2:52694 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184762s
	[INFO] 10.244.2.2:51472 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080603s
	[INFO] 10.244.0.4:33893 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185444s
	[INFO] 10.244.0.4:54135 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072181s
	[INFO] 10.244.1.2:36921 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189721s
	[INFO] 10.244.2.2:60407 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015337s
	[INFO] 10.244.2.2:45057 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177157s
	[INFO] 10.244.1.2:52652 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273969s
	[INFO] 10.244.1.2:41042 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160192s
	[INFO] 10.244.2.2:55743 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233222s
	[INFO] 10.244.2.2:43090 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000228333s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a6b7ee8bc2fdc38b38cf39f7d4cb9080e58593b4e35407bf28ba440d3a7aae44] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48351 - 26104 "HINFO IN 7964281783160883336.3331880714538953204. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009735234s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37560->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [da8d445c7477e86f69595642d02430b9dbe61c4ecbff89353b7edca7c7bd72da] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53638 - 4493 "HINFO IN 7144604500221555542.3321365182851520079. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008485411s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-834040
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T20_23_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:23:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:40:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:35:53 +0000   Mon, 11 Mar 2024 20:23:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:35:53 +0000   Mon, 11 Mar 2024 20:23:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:35:53 +0000   Mon, 11 Mar 2024 20:23:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:35:53 +0000   Mon, 11 Mar 2024 20:23:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-834040
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6cb0aa00d5a4d388da50e20e0a9ccef
	  System UUID:                f6cb0aa0-0d5a-4d38-8da5-0e20e0a9ccef
	  Boot ID:                    47b6723c-3999-42a9-a19b-9f1c67aaecb8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-d62cw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-d6f2x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-5dd5756b68-kq47h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-834040                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-bw656                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-834040             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-834040    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-h8svv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-834040             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-834040                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 16m    kube-proxy       
	  Normal   Starting                 4m38s  kube-proxy       
	  Normal   NodeHasSufficientPID     17m    kubelet          Node ha-834040 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  17m    kubelet          Node ha-834040 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m    kubelet          Node ha-834040 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  17m    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m    node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Normal   NodeReady                16m    kubelet          Node ha-834040 status is now: NodeReady
	  Normal   RegisteredNode           15m    node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Normal   RegisteredNode           14m    node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Warning  ContainerGCFailed        6m3s   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m28s  node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Normal   RegisteredNode           4m26s  node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	  Normal   RegisteredNode           3m15s  node-controller  Node ha-834040 event: Registered Node ha-834040 in Controller
	
	
	Name:               ha-834040-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T20_24_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:24:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:40:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:36:34 +0000   Mon, 11 Mar 2024 20:35:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:36:34 +0000   Mon, 11 Mar 2024 20:35:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:36:34 +0000   Mon, 11 Mar 2024 20:35:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:36:34 +0000   Mon, 11 Mar 2024 20:35:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    ha-834040-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d932b403e92c478480bfc9080f018c7a
	  System UUID:                d932b403-e92c-4784-80bf-c9080f018c7a
	  Boot ID:                    ea703ef6-2ef0-497e-8b2c-6615b5191cee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-h9jx5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-834040-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-rqcq6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-834040-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-834040-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-dsjx4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-834040-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-834040-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 15m                  kube-proxy       
	  Normal  Starting                 4m11s                kube-proxy       
	  Normal  RegisteredNode           15m                  node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  RegisteredNode           15m                  node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  RegisteredNode           14m                  node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  NodeNotReady             11m                  node-controller  Node ha-834040-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node ha-834040-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node ha-834040-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m5s)  kubelet          Node ha-834040-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m28s                node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  RegisteredNode           4m26s                node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	  Normal  RegisteredNode           3m15s                node-controller  Node ha-834040-m02 event: Registered Node ha-834040-m02 in Controller
	
	
	Name:               ha-834040-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-834040-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=ha-834040
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T20_27_30_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:27:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-834040-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:38:02 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 11 Mar 2024 20:37:42 +0000   Mon, 11 Mar 2024 20:38:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 11 Mar 2024 20:37:42 +0000   Mon, 11 Mar 2024 20:38:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 11 Mar 2024 20:37:42 +0000   Mon, 11 Mar 2024 20:38:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 11 Mar 2024 20:37:42 +0000   Mon, 11 Mar 2024 20:38:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-834040-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 01d975a4d97b45958b00e8cebd68bf34
	  System UUID:                01d975a4-d97b-4595-8b00-e8cebd68bf34
	  Boot ID:                    b8f29019-7e0c-455d-b088-b47ed3621612
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-cxkh6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-gdbjb               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-wc99r            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet          Node ha-834040-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet          Node ha-834040-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet          Node ha-834040-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-834040-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m28s                  node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal   RegisteredNode           4m26s                  node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal   NodeNotReady             3m48s                  node-controller  Node ha-834040-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m15s                  node-controller  Node ha-834040-m04 event: Registered Node ha-834040-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-834040-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-834040-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-834040-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-834040-m04 has been rebooted, boot id: b8f29019-7e0c-455d-b088-b47ed3621612
	  Normal   NodeReady                2m48s                  kubelet          Node ha-834040-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-834040-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.744921] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.061444] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067061] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.157638] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.161215] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.262542] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +5.181266] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.062600] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.584713] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.482512] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.376234] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.096131] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.894025] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.119032] kauditd_printk_skb: 58 callbacks suppressed
	[Mar11 20:24] kauditd_printk_skb: 6 callbacks suppressed
	[Mar11 20:35] systemd-fstab-generator[3840]: Ignoring "noauto" option for root device
	[  +0.158347] systemd-fstab-generator[3852]: Ignoring "noauto" option for root device
	[  +0.196962] systemd-fstab-generator[3866]: Ignoring "noauto" option for root device
	[  +0.150632] systemd-fstab-generator[3878]: Ignoring "noauto" option for root device
	[  +0.268673] systemd-fstab-generator[3902]: Ignoring "noauto" option for root device
	[  +0.922138] systemd-fstab-generator[4024]: Ignoring "noauto" option for root device
	[  +3.522109] kauditd_printk_skb: 175 callbacks suppressed
	[ +21.790982] kauditd_printk_skb: 41 callbacks suppressed
	[ +25.828492] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [107118ba00c2d09428d6fb98ab4898f7fdeab599261beefaf53f6d20b8a12802] <==
	{"level":"info","ts":"2024-03-11T20:37:47.378058Z","caller":"traceutil/trace.go:171","msg":"trace[775851496] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2508; }","duration":"117.01008ms","start":"2024-03-11T20:37:47.261022Z","end":"2024-03-11T20:37:47.378032Z","steps":["trace[775851496] 'range keys from in-memory index tree'  (duration: 115.415427ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:37:47.377999Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.016614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2895"}
	{"level":"info","ts":"2024-03-11T20:37:47.378338Z","caller":"traceutil/trace.go:171","msg":"trace[2049817232] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:2508; }","duration":"218.347978ms","start":"2024-03-11T20:37:47.159977Z","end":"2024-03-11T20:37:47.378325Z","steps":["trace[2049817232] 'range keys from in-memory index tree'  (duration: 216.668836ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:37:47.378411Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.563254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-03-11T20:37:47.378479Z","caller":"traceutil/trace.go:171","msg":"trace[30297874] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2508; }","duration":"128.633338ms","start":"2024-03-11T20:37:47.249836Z","end":"2024-03-11T20:37:47.37847Z","steps":["trace[30297874] 'range keys from in-memory index tree'  (duration: 126.964454ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:37:47.377952Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.560527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-wc99r\" ","response":"range_response_count:1 size:4429"}
	{"level":"info","ts":"2024-03-11T20:37:47.378635Z","caller":"traceutil/trace.go:171","msg":"trace[1801264300] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-wc99r; range_end:; response_count:1; response_revision:2508; }","duration":"143.255886ms","start":"2024-03-11T20:37:47.23537Z","end":"2024-03-11T20:37:47.378626Z","steps":["trace[1801264300] 'range keys from in-memory index tree'  (duration: 141.291995ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:37:55.834158Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.40:47608","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-03-11T20:37:55.848267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 switched to configuration voters=(5314053736747350461 18037291470719772950)"}
	{"level":"info","ts":"2024-03-11T20:37:55.848412Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"b64da5b92548cbb8","local-member-id":"fa515506e66f6916","removed-remote-peer-id":"7f2e3f2197a91816","removed-remote-peer-urls":["https://192.168.39.40:2380"]}
	{"level":"info","ts":"2024-03-11T20:37:55.848507Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"warn","ts":"2024-03-11T20:37:55.849184Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:37:55.849254Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"warn","ts":"2024-03-11T20:37:55.849858Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:37:55.849918Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:37:55.85032Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"warn","ts":"2024-03-11T20:37:55.850598Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816","error":"context canceled"}
	{"level":"warn","ts":"2024-03-11T20:37:55.850881Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"7f2e3f2197a91816","error":"failed to read 7f2e3f2197a91816 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-11T20:37:55.851662Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"warn","ts":"2024-03-11T20:37:55.85594Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816","error":"context canceled"}
	{"level":"info","ts":"2024-03-11T20:37:55.858174Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:37:55.858245Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:37:55.858286Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"fa515506e66f6916","removed-remote-peer-id":"7f2e3f2197a91816"}
	{"level":"warn","ts":"2024-03-11T20:37:55.866312Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"fa515506e66f6916","remote-peer-id-stream-handler":"fa515506e66f6916","remote-peer-id-from":"7f2e3f2197a91816"}
	{"level":"warn","ts":"2024-03-11T20:37:55.876208Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"fa515506e66f6916","remote-peer-id-stream-handler":"fa515506e66f6916","remote-peer-id-from":"7f2e3f2197a91816"}
	
	
	==> etcd [4395af23a1752ec5439511ec9f2d1777205e2477bbf64c9d71892f2ac95b0cc1] <==
	WARNING: 2024/03/11 20:33:29 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/11 20:33:29 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/11 20:33:29 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/11 20:33:29 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/11 20:33:29 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-11T20:33:29.218634Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.128:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-11T20:33:29.218686Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.128:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-11T20:33:29.218746Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fa515506e66f6916","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-11T20:33:29.218905Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.218958Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.219019Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.219219Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.219301Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.219361Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.219394Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"49bf4fb7f029b9bd"}
	{"level":"info","ts":"2024-03-11T20:33:29.21942Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.219448Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.219507Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.219593Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.219648Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.219697Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fa515506e66f6916","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.21973Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7f2e3f2197a91816"}
	{"level":"info","ts":"2024-03-11T20:33:29.222501Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.128:2380"}
	{"level":"info","ts":"2024-03-11T20:33:29.222652Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.128:2380"}
	{"level":"info","ts":"2024-03-11T20:33:29.222702Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-834040","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.128:2380"],"advertise-client-urls":["https://192.168.39.128:2379"]}
	
	
	==> kernel <==
	 20:40:30 up 17 min,  0 users,  load average: 0.18, 0.30, 0.27
	Linux ha-834040 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5a4fa8160f6f5215b914701525d711241bb4d574dd1f1c698301b206fc545ab5] <==
	I0311 20:39:44.182930       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	I0311 20:39:54.199754       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0311 20:39:54.199881       1 main.go:227] handling current node
	I0311 20:39:54.199891       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0311 20:39:54.199897       1 main.go:250] Node ha-834040-m02 has CIDR [10.244.1.0/24] 
	I0311 20:39:54.200312       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0311 20:39:54.200421       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	I0311 20:40:04.206837       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0311 20:40:04.206881       1 main.go:227] handling current node
	I0311 20:40:04.206906       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0311 20:40:04.206917       1 main.go:250] Node ha-834040-m02 has CIDR [10.244.1.0/24] 
	I0311 20:40:04.207133       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0311 20:40:04.207168       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	I0311 20:40:14.223062       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0311 20:40:14.223164       1 main.go:227] handling current node
	I0311 20:40:14.223194       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0311 20:40:14.223201       1 main.go:250] Node ha-834040-m02 has CIDR [10.244.1.0/24] 
	I0311 20:40:14.223350       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0311 20:40:14.223383       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	I0311 20:40:24.230968       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0311 20:40:24.231052       1 main.go:227] handling current node
	I0311 20:40:24.231127       1 main.go:223] Handling node with IPs: map[192.168.39.101:{}]
	I0311 20:40:24.231144       1 main.go:250] Node ha-834040-m02 has CIDR [10.244.1.0/24] 
	I0311 20:40:24.231478       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I0311 20:40:24.231516       1 main.go:250] Node ha-834040-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [eaefaf7c41e62b6bf2975f73ab22408cd0498630eeb0042872545e429387e0db] <==
	I0311 20:35:06.468700       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0311 20:35:06.468871       1 main.go:107] hostIP = 192.168.39.128
	podIP = 192.168.39.128
	I0311 20:35:06.469036       1 main.go:116] setting mtu 1500 for CNI 
	I0311 20:35:06.469055       1 main.go:146] kindnetd IP family: "ipv4"
	I0311 20:35:06.471740       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0311 20:35:09.585387       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0311 20:35:12.653602       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0311 20:35:15.725968       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0311 20:35:18.797456       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0311 20:35:28.134032       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.45:40812->10.96.0.1:443: read: connection reset by peer
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.45:40812->10.96.0.1:443: read: connection reset by peer
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
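
The kindnetd process above retries its initial node list a bounded number of times and panics once the apiserver stays unreachable, after which the container runtime restarts it. A rough client-go sketch of that retry-then-panic pattern follows; the retry count and the 3-second sleep are assumed values for illustration, not kindnetd's real settings.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config, as kindnetd runs inside a pod.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	const maxRetries = 5 // assumed bound, not kindnetd's actual constant
	for i := 0; i < maxRetries; i++ {
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			fmt.Printf("got %d nodes\n", len(nodes.Items))
			return
		}
		fmt.Println("Failed to get nodes, retrying after error:", err)
		time.Sleep(3 * time.Second)
	}
	panic("Reached maximum retries obtaining node list")
}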
	
	
	==> kube-apiserver [4d12665eb117c2cc75d85256cf4dd018d8ed2992d5f7c141134a85b41b2a4294] <==
	I0311 20:35:51.891559       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0311 20:35:51.891576       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0311 20:35:51.891596       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0311 20:35:51.891676       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0311 20:35:51.891779       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0311 20:35:51.976323       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0311 20:35:51.979479       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0311 20:35:51.979866       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0311 20:35:51.979933       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0311 20:35:51.982341       1 shared_informer.go:318] Caches are synced for configmaps
	I0311 20:35:51.982405       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0311 20:35:51.982960       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0311 20:35:51.987895       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0311 20:35:51.988003       1 aggregator.go:166] initial CRD sync complete...
	I0311 20:35:51.988042       1 autoregister_controller.go:141] Starting autoregister controller
	I0311 20:35:51.988048       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0311 20:35:51.988054       1 cache.go:39] Caches are synced for autoregister controller
	I0311 20:35:51.997674       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0311 20:35:52.001196       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.101 192.168.39.40]
	I0311 20:35:52.003331       1 controller.go:624] quota admission added evaluator for: endpoints
	I0311 20:35:52.020640       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0311 20:35:52.026842       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0311 20:35:52.890036       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0311 20:35:53.445228       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.101 192.168.39.128 192.168.39.40]
	W0311 20:36:03.449214       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.101 192.168.39.128]
	
	
	==> kube-apiserver [9f072720516b73eb54d2f1b36bfaf802e1d1f8c14b6fab73ed78f4e12e4dfc3d] <==
	I0311 20:35:06.425606       1 options.go:220] external host was not specified, using 192.168.39.128
	I0311 20:35:06.432266       1 server.go:148] Version: v1.28.4
	I0311 20:35:06.432451       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:35:07.102858       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0311 20:35:07.109640       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0311 20:35:07.109867       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0311 20:35:07.110200       1 instance.go:298] Using reconciler: lease
	W0311 20:35:27.101607       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0311 20:35:27.102541       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0311 20:35:27.110843       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
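
This apiserver instance exits after its storage backend at 127.0.0.1:2379 never completes the TLS handshake, hence "Error creating leases: error creating storage factory: context deadline exceeded". A quick reachability check against that etcd client port can be sketched as below; the 5-second timeout is an arbitrary choice for the example.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The address comes from the grpc errors above ({Addr: "127.0.0.1:2379"}).
	conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 5*time.Second)
	if err != nil {
		fmt.Println("etcd client port unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("etcd client port accepts TCP connections (TLS and auth are not checked here)")
}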
	
	
	==> kube-controller-manager [a61a17645171f66b7a1858a9482aeee87d6041bfd933d305b1548e3ebfa58800] <==
	I0311 20:37:52.785221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="52.913µs"
	I0311 20:37:54.167133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.869931ms"
	I0311 20:37:54.167541       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.991µs"
	I0311 20:37:54.619358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="80.717µs"
	I0311 20:37:55.311544       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="72.906µs"
	I0311 20:37:55.342206       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="61.971µs"
	I0311 20:37:55.354945       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.186µs"
	I0311 20:38:07.554806       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-834040-m04"
	I0311 20:38:09.441913       1 event.go:307] "Event occurred" object="ha-834040-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node ha-834040-m03 event: Removing Node ha-834040-m03 from Controller"
	E0311 20:38:24.361740       1 gc_controller.go:153] "Failed to get node" err="node \"ha-834040-m03\" not found" node="ha-834040-m03"
	E0311 20:38:24.361864       1 gc_controller.go:153] "Failed to get node" err="node \"ha-834040-m03\" not found" node="ha-834040-m03"
	E0311 20:38:24.361903       1 gc_controller.go:153] "Failed to get node" err="node \"ha-834040-m03\" not found" node="ha-834040-m03"
	E0311 20:38:24.361929       1 gc_controller.go:153] "Failed to get node" err="node \"ha-834040-m03\" not found" node="ha-834040-m03"
	E0311 20:38:24.361953       1 gc_controller.go:153] "Failed to get node" err="node \"ha-834040-m03\" not found" node="ha-834040-m03"
	E0311 20:38:44.362883       1 gc_controller.go:153] "Failed to get node" err="node \"ha-834040-m03\" not found" node="ha-834040-m03"
	E0311 20:38:44.362944       1 gc_controller.go:153] "Failed to get node" err="node \"ha-834040-m03\" not found" node="ha-834040-m03"
	E0311 20:38:44.362953       1 gc_controller.go:153] "Failed to get node" err="node \"ha-834040-m03\" not found" node="ha-834040-m03"
	E0311 20:38:44.362959       1 gc_controller.go:153] "Failed to get node" err="node \"ha-834040-m03\" not found" node="ha-834040-m03"
	E0311 20:38:44.362965       1 gc_controller.go:153] "Failed to get node" err="node \"ha-834040-m03\" not found" node="ha-834040-m03"
	I0311 20:38:44.462036       1 event.go:307] "Event occurred" object="ha-834040-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-834040-m04 status is now: NodeNotReady"
	I0311 20:38:44.479670       1 event.go:307] "Event occurred" object="kube-system/kindnet-gdbjb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 20:38:44.498746       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-wc99r" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 20:38:44.528954       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-cxkh6" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 20:38:44.573623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="45.058588ms"
	I0311 20:38:44.573743       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.861µs"
	
	
	==> kube-controller-manager [f1a69a51bad87e670335840f5e4e47f671ebfb4ee83d1a1be58ee2fe4d9111f1] <==
	I0311 20:35:07.156754       1 serving.go:348] Generated self-signed cert in-memory
	I0311 20:35:07.868312       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0311 20:35:07.868359       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:35:07.870331       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0311 20:35:07.870460       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0311 20:35:07.870711       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0311 20:35:07.870860       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0311 20:35:28.117944       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.128:8443/healthz\": dial tcp 192.168.39.128:8443: connect: connection refused"
	
	
	==> kube-proxy [ab5ff27a1d4cb358fb3b3a0a4f4dfe5df4aca314f35a302c79be4d9f895b1a25] <==
	E0311 20:32:03.085770       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:03.085707       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:03.085820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:10.317586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:10.317685       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:10.317586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:10.317717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:10.317800       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:10.317860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:21.133545       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:21.133653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:21.133546       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:21.133687       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:24.206833       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:24.206967       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:39.567055       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:39.567347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:42.638253       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:42.638350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:32:51.854404       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:32:51.854653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1741": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:33:07.214528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:33:07.214980       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1808": dial tcp 192.168.39.254:8443: connect: no route to host
	W0311 20:33:22.574689       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:33:22.575060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-834040&resourceVersion=1750": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [f9876035a67109aab2d7ccb01e043938c07a68707f0b5aac080bdc3f86a9a263] <==
	I0311 20:35:07.930788       1 server_others.go:69] "Using iptables proxy"
	E0311 20:35:10.095535       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-834040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:35:13.166997       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-834040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:35:16.240564       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-834040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:35:22.383285       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-834040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0311 20:35:34.670318       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-834040": dial tcp 192.168.39.254:8443: connect: no route to host
	I0311 20:35:52.042967       1 node.go:141] Successfully retrieved node IP: 192.168.39.128
	I0311 20:35:52.085004       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 20:35:52.085151       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 20:35:52.087925       1 server_others.go:152] "Using iptables Proxier"
	I0311 20:35:52.088044       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 20:35:52.088409       1 server.go:846] "Version info" version="v1.28.4"
	I0311 20:35:52.088446       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:35:52.089756       1 config.go:188] "Starting service config controller"
	I0311 20:35:52.089826       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 20:35:52.089906       1 config.go:97] "Starting endpoint slice config controller"
	I0311 20:35:52.089938       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 20:35:52.092214       1 config.go:315] "Starting node config controller"
	I0311 20:35:52.092248       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 20:35:52.190375       1 shared_informer.go:318] Caches are synced for service config
	I0311 20:35:52.190389       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 20:35:52.193180       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [295775061cd270ab219ce780ebeb623bf6f1dedfcd5e5693598e3cb2b65c506d] <==
	W0311 20:35:43.940341       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.128:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:43.940388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.128:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:43.993341       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.128:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:43.993470       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.128:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:44.659027       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.128:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:44.659188       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.128:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:45.908877       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.128:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:45.908960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.128:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:46.500888       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.128:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:46.501032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.128:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:46.918884       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.128:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:46.918944       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.128:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:47.228369       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.128:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:47.228471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.128:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:47.788306       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:47.788387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:47.912772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.128:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:47.912840       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.128:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:48.112507       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:48.112596       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:48.190808       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.128:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:48.190879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.128:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	W0311 20:35:48.582727       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	E0311 20:35:48.582879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.128:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.128:8443: connect: connection refused
	I0311 20:36:09.225023       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [4b273e6fedf1a8657c506a055322c245c41196c8e1dce12626b2459bf4c53861] <==
	W0311 20:33:25.629790       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0311 20:33:25.629877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 20:33:25.648704       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 20:33:25.648759       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 20:33:25.843457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 20:33:25.843537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 20:33:25.901179       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 20:33:25.901263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 20:33:26.067521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 20:33:26.067582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 20:33:26.524530       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0311 20:33:26.524719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0311 20:33:26.898799       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 20:33:26.898827       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 20:33:27.088978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 20:33:27.089038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 20:33:27.243140       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 20:33:27.243239       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 20:33:27.393809       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 20:33:27.393886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0311 20:33:27.560746       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 20:33:27.560958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0311 20:33:27.984255       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 20:33:27.984310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 20:33:29.142305       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 11 20:36:48 ha-834040 kubelet[1373]: I0311 20:36:48.577769    1373 scope.go:117] "RemoveContainer" containerID="a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7"
	Mar 11 20:36:48 ha-834040 kubelet[1373]: E0311 20:36:48.578168    1373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bbc64228-86a0-4e0c-9eef-f4644439ca13)\"" pod="kube-system/storage-provisioner" podUID="bbc64228-86a0-4e0c-9eef-f4644439ca13"
	Mar 11 20:37:03 ha-834040 kubelet[1373]: I0311 20:37:03.578496    1373 scope.go:117] "RemoveContainer" containerID="a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7"
	Mar 11 20:37:03 ha-834040 kubelet[1373]: E0311 20:37:03.579189    1373 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bbc64228-86a0-4e0c-9eef-f4644439ca13)\"" pod="kube-system/storage-provisioner" podUID="bbc64228-86a0-4e0c-9eef-f4644439ca13"
	Mar 11 20:37:17 ha-834040 kubelet[1373]: I0311 20:37:17.578310    1373 scope.go:117] "RemoveContainer" containerID="a20030032ebd2a756b14fd27b09feb97d2d1f5c153ffd8fd8386dbbd305044a7"
	Mar 11 20:37:27 ha-834040 kubelet[1373]: E0311 20:37:27.614046    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:37:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:37:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:37:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:37:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 20:38:27 ha-834040 kubelet[1373]: E0311 20:38:27.614465    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:38:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:38:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:38:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:38:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 20:39:27 ha-834040 kubelet[1373]: E0311 20:39:27.612958    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:39:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:39:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:39:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:39:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 20:40:27 ha-834040 kubelet[1373]: E0311 20:40:27.613045    1373 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:40:27 ha-834040 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:40:27 ha-834040 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:40:27 ha-834040 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:40:27 ha-834040 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 20:40:29.501477   35175 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18358-11004/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-834040 -n ha-834040
helpers_test.go:261: (dbg) Run:  kubectl --context ha-834040 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/StopCluster (142.17s)
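Note on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner rejects any single token longer than its 64 KiB default limit, so one over-long line in lastStart.txt is enough to abort reading the last-start log. Below is a minimal sketch of the usual workaround (enlarging the scanner's buffer); the file path and buffer sizes are illustrative, and this is not minikube's actual logs code:

	// Minimal sketch: read a file whose lines may exceed bufio.Scanner's
	// default 64 KiB token limit by giving the scanner a larger buffer.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this call, a line longer than 64 KiB makes sc.Err() return
		// bufio.ErrTooLong ("token too long"), as surfaced in the stderr above.
		sc.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}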

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (309.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-232100
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-232100
E0311 20:55:41.982422   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 20:56:58.809606   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-232100: exit status 82 (2m2.689801656s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-232100-m03"  ...
	* Stopping node "multinode-232100-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-232100" : exit status 82
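The exit status 82 above accompanies the GUEST_STOP_TIMEOUT error in the stderr block: minikube asked the kvm2 driver to stop each node, kept polling the VM state, and gave up while the machines still reported "Running". Below is a minimal sketch of that poll-until-stopped-or-timeout pattern, assuming hypothetical stopVM and getState helpers rather than minikube's real machine driver interface:

	// Minimal sketch of a stop-with-timeout loop; stopVM and getState are
	// hypothetical stand-ins, not minikube's actual driver API.
	package main

	import (
		"context"
		"fmt"
		"time"
	)

	func stopVM(name string) error    { fmt.Println("requesting stop of", name); return nil } // hypothetical
	func getState(name string) string { return "Running" }                                     // hypothetical

	func stopWithTimeout(ctx context.Context, name string, timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(ctx, timeout)
		defer cancel()
		if err := stopVM(name); err != nil {
			return err
		}
		ticker := time.NewTicker(5 * time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				// Mirrors the error surfaced above when the deadline passes first.
				return fmt.Errorf("GUEST_STOP_TIMEOUT: unable to stop vm, current state %q", getState(name))
			case <-ticker.C:
				if getState(name) == "Stopped" {
					return nil
				}
			}
		}
	}

	func main() {
		// Roughly the two-minute window suggested by the 2m2.68s elapsed time above.
		if err := stopWithTimeout(context.Background(), "multinode-232100-m03", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}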
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-232100 --wait=true -v=8 --alsologtostderr
E0311 20:57:38.935237   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 21:00:01.853283   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-232100 --wait=true -v=8 --alsologtostderr: (3m4.606207183s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-232100
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-232100 -n multinode-232100
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-232100 logs -n 25: (1.656042075s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-232100 cp multinode-232100-m02:/home/docker/cp-test.txt                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile149036959/001/cp-test_multinode-232100-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-232100 cp multinode-232100-m02:/home/docker/cp-test.txt                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100:/home/docker/cp-test_multinode-232100-m02_multinode-232100.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n multinode-232100 sudo cat                                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | /home/docker/cp-test_multinode-232100-m02_multinode-232100.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-232100 cp multinode-232100-m02:/home/docker/cp-test.txt                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m03:/home/docker/cp-test_multinode-232100-m02_multinode-232100-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n multinode-232100-m03 sudo cat                                   | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | /home/docker/cp-test_multinode-232100-m02_multinode-232100-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-232100 cp testdata/cp-test.txt                                                | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-232100 cp multinode-232100-m03:/home/docker/cp-test.txt                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile149036959/001/cp-test_multinode-232100-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-232100 cp multinode-232100-m03:/home/docker/cp-test.txt                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100:/home/docker/cp-test_multinode-232100-m03_multinode-232100.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n multinode-232100 sudo cat                                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | /home/docker/cp-test_multinode-232100-m03_multinode-232100.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-232100 cp multinode-232100-m03:/home/docker/cp-test.txt                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m02:/home/docker/cp-test_multinode-232100-m03_multinode-232100-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n multinode-232100-m02 sudo cat                                   | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | /home/docker/cp-test_multinode-232100-m03_multinode-232100-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-232100 node stop m03                                                          | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	| node    | multinode-232100 node start                                                             | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:55 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-232100                                                                | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:55 UTC |                     |
	| stop    | -p multinode-232100                                                                     | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:55 UTC |                     |
	| start   | -p multinode-232100                                                                     | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:57 UTC | 11 Mar 24 21:00 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-232100                                                                | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 21:00 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 20:57:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 20:57:12.643792   43208 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:57:12.644056   43208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:57:12.644065   43208 out.go:304] Setting ErrFile to fd 2...
	I0311 20:57:12.644069   43208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:57:12.644241   43208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:57:12.644728   43208 out.go:298] Setting JSON to false
	I0311 20:57:12.645636   43208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5982,"bootTime":1710184651,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 20:57:12.645695   43208 start.go:139] virtualization: kvm guest
	I0311 20:57:12.648022   43208 out.go:177] * [multinode-232100] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 20:57:12.649372   43208 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 20:57:12.650651   43208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 20:57:12.649374   43208 notify.go:220] Checking for updates...
	I0311 20:57:12.652097   43208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:57:12.653465   43208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:57:12.654752   43208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 20:57:12.656138   43208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 20:57:12.658068   43208 config.go:182] Loaded profile config "multinode-232100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:57:12.658156   43208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 20:57:12.658536   43208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:57:12.658579   43208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:57:12.673100   43208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34003
	I0311 20:57:12.673499   43208 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:57:12.674138   43208 main.go:141] libmachine: Using API Version  1
	I0311 20:57:12.674184   43208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:57:12.674551   43208 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:57:12.674743   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:57:12.709161   43208 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 20:57:12.710564   43208 start.go:297] selected driver: kvm2
	I0311 20:57:12.710581   43208 start.go:901] validating driver "kvm2" against &{Name:multinode-232100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.4 ClusterName:multinode-232100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:57:12.710694   43208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 20:57:12.710992   43208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:57:12.711054   43208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 20:57:12.726360   43208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 20:57:12.727011   43208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 20:57:12.727042   43208 cni.go:84] Creating CNI manager for ""
	I0311 20:57:12.727049   43208 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0311 20:57:12.727112   43208 start.go:340] cluster config:
	{Name:multinode-232100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-232100 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kon
g:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:57:12.727219   43208 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:57:12.728858   43208 out.go:177] * Starting "multinode-232100" primary control-plane node in "multinode-232100" cluster
	I0311 20:57:12.730037   43208 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:57:12.730064   43208 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0311 20:57:12.730077   43208 cache.go:56] Caching tarball of preloaded images
	I0311 20:57:12.730156   43208 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 20:57:12.730170   43208 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 20:57:12.730314   43208 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/config.json ...
	I0311 20:57:12.730535   43208 start.go:360] acquireMachinesLock for multinode-232100: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 20:57:12.730585   43208 start.go:364] duration metric: took 31.267µs to acquireMachinesLock for "multinode-232100"
	I0311 20:57:12.730604   43208 start.go:96] Skipping create...Using existing machine configuration
	I0311 20:57:12.730613   43208 fix.go:54] fixHost starting: 
	I0311 20:57:12.730939   43208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:57:12.730973   43208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:57:12.743949   43208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I0311 20:57:12.744357   43208 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:57:12.744818   43208 main.go:141] libmachine: Using API Version  1
	I0311 20:57:12.744842   43208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:57:12.745200   43208 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:57:12.745419   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:57:12.745599   43208 main.go:141] libmachine: (multinode-232100) Calling .GetState
	I0311 20:57:12.747199   43208 fix.go:112] recreateIfNeeded on multinode-232100: state=Running err=<nil>
	W0311 20:57:12.747215   43208 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 20:57:12.749788   43208 out.go:177] * Updating the running kvm2 "multinode-232100" VM ...
	I0311 20:57:12.751148   43208 machine.go:94] provisionDockerMachine start ...
	I0311 20:57:12.751162   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:57:12.751352   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:57:12.754082   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:12.754467   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:12.754494   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:12.754632   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:57:12.754807   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:12.754962   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:12.755081   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:57:12.755229   43208 main.go:141] libmachine: Using SSH client type: native
	I0311 20:57:12.755452   43208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0311 20:57:12.755469   43208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 20:57:12.867207   43208 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-232100
	
	I0311 20:57:12.867232   43208 main.go:141] libmachine: (multinode-232100) Calling .GetMachineName
	I0311 20:57:12.867472   43208 buildroot.go:166] provisioning hostname "multinode-232100"
	I0311 20:57:12.867501   43208 main.go:141] libmachine: (multinode-232100) Calling .GetMachineName
	I0311 20:57:12.867669   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:57:12.870123   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:12.870478   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:12.870505   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:12.870685   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:57:12.870887   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:12.871034   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:12.871171   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:57:12.871311   43208 main.go:141] libmachine: Using SSH client type: native
	I0311 20:57:12.871448   43208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0311 20:57:12.871460   43208 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-232100 && echo "multinode-232100" | sudo tee /etc/hostname
	I0311 20:57:12.997275   43208 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-232100
	
	I0311 20:57:12.997302   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:57:13.000031   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.000378   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:13.000407   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.000581   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:57:13.000762   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:13.000936   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:13.001081   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:57:13.001236   43208 main.go:141] libmachine: Using SSH client type: native
	I0311 20:57:13.001402   43208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0311 20:57:13.001419   43208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-232100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-232100/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-232100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 20:57:13.110286   43208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 20:57:13.110315   43208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 20:57:13.110383   43208 buildroot.go:174] setting up certificates
	I0311 20:57:13.110393   43208 provision.go:84] configureAuth start
	I0311 20:57:13.110402   43208 main.go:141] libmachine: (multinode-232100) Calling .GetMachineName
	I0311 20:57:13.110662   43208 main.go:141] libmachine: (multinode-232100) Calling .GetIP
	I0311 20:57:13.113179   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.113521   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:13.113546   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.113718   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:57:13.115812   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.116182   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:13.116214   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.116325   43208 provision.go:143] copyHostCerts
	I0311 20:57:13.116356   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:57:13.116391   43208 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 20:57:13.116401   43208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:57:13.116466   43208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 20:57:13.116548   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:57:13.116566   43208 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 20:57:13.116570   43208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:57:13.116593   43208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 20:57:13.116648   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:57:13.116669   43208 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 20:57:13.116676   43208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:57:13.116697   43208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 20:57:13.116776   43208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.multinode-232100 san=[127.0.0.1 192.168.39.134 localhost minikube multinode-232100]
	I0311 20:57:13.487482   43208 provision.go:177] copyRemoteCerts
	I0311 20:57:13.487536   43208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 20:57:13.487558   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:57:13.490067   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.490382   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:13.490408   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.490593   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:57:13.490789   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:13.490931   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:57:13.491061   43208 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/multinode-232100/id_rsa Username:docker}
	I0311 20:57:13.581296   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0311 20:57:13.581361   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 20:57:13.610600   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0311 20:57:13.610654   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0311 20:57:13.637911   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0311 20:57:13.637962   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 20:57:13.664942   43208 provision.go:87] duration metric: took 554.538819ms to configureAuth
	I0311 20:57:13.664966   43208 buildroot.go:189] setting minikube options for container-runtime
	I0311 20:57:13.665169   43208 config.go:182] Loaded profile config "multinode-232100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:57:13.665231   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:57:13.667769   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.668145   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:13.668191   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.668324   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:57:13.668526   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:13.668667   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:13.668800   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:57:13.668944   43208 main.go:141] libmachine: Using SSH client type: native
	I0311 20:57:13.669122   43208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0311 20:57:13.669137   43208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 20:58:44.389199   43208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 20:58:44.389228   43208 machine.go:97] duration metric: took 1m31.638069174s to provisionDockerMachine
	I0311 20:58:44.389241   43208 start.go:293] postStartSetup for "multinode-232100" (driver="kvm2")
	I0311 20:58:44.389251   43208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 20:58:44.389267   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:58:44.389600   43208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 20:58:44.389635   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:58:44.392857   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.393275   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:58:44.393304   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.393447   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:58:44.393628   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:58:44.393791   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:58:44.393935   43208 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/multinode-232100/id_rsa Username:docker}
	I0311 20:58:44.477468   43208 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 20:58:44.481878   43208 command_runner.go:130] > NAME=Buildroot
	I0311 20:58:44.481893   43208 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0311 20:58:44.481897   43208 command_runner.go:130] > ID=buildroot
	I0311 20:58:44.481910   43208 command_runner.go:130] > VERSION_ID=2023.02.9
	I0311 20:58:44.481916   43208 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0311 20:58:44.482187   43208 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 20:58:44.482204   43208 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 20:58:44.482262   43208 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 20:58:44.482355   43208 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 20:58:44.482374   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /etc/ssl/certs/182352.pem
	I0311 20:58:44.482458   43208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 20:58:44.493122   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:58:44.519005   43208 start.go:296] duration metric: took 129.752749ms for postStartSetup
	I0311 20:58:44.519071   43208 fix.go:56] duration metric: took 1m31.78845688s for fixHost
	I0311 20:58:44.519099   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:58:44.521496   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.521835   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:58:44.521863   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.521977   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:58:44.522161   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:58:44.522326   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:58:44.522464   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:58:44.522625   43208 main.go:141] libmachine: Using SSH client type: native
	I0311 20:58:44.522771   43208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0311 20:58:44.522783   43208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 20:58:44.625720   43208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710190724.606533979
	
	I0311 20:58:44.625740   43208 fix.go:216] guest clock: 1710190724.606533979
	I0311 20:58:44.625749   43208 fix.go:229] Guest: 2024-03-11 20:58:44.606533979 +0000 UTC Remote: 2024-03-11 20:58:44.519082181 +0000 UTC m=+91.921532697 (delta=87.451798ms)
	I0311 20:58:44.625792   43208 fix.go:200] guest clock delta is within tolerance: 87.451798ms
	I0311 20:58:44.625798   43208 start.go:83] releasing machines lock for "multinode-232100", held for 1m31.895201285s
	I0311 20:58:44.625849   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:58:44.626123   43208 main.go:141] libmachine: (multinode-232100) Calling .GetIP
	I0311 20:58:44.628318   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.628775   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:58:44.628817   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.628967   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:58:44.629515   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:58:44.629689   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:58:44.629760   43208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 20:58:44.629818   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:58:44.629918   43208 ssh_runner.go:195] Run: cat /version.json
	I0311 20:58:44.629946   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:58:44.632160   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.632472   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:58:44.632499   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.632518   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.632622   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:58:44.632815   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:58:44.632935   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:58:44.632953   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.632971   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:58:44.633143   43208 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/multinode-232100/id_rsa Username:docker}
	I0311 20:58:44.633212   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:58:44.633355   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:58:44.633492   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:58:44.633629   43208 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/multinode-232100/id_rsa Username:docker}
	I0311 20:58:44.729119   43208 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0311 20:58:44.729811   43208 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0311 20:58:44.729956   43208 ssh_runner.go:195] Run: systemctl --version
	I0311 20:58:44.736257   43208 command_runner.go:130] > systemd 252 (252)
	I0311 20:58:44.736300   43208 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0311 20:58:44.736358   43208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 20:58:44.903779   43208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0311 20:58:44.912761   43208 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0311 20:58:44.913320   43208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 20:58:44.913383   43208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 20:58:44.923445   43208 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0311 20:58:44.923465   43208 start.go:494] detecting cgroup driver to use...
	I0311 20:58:44.923520   43208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 20:58:44.941102   43208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 20:58:44.955088   43208 docker.go:217] disabling cri-docker service (if available) ...
	I0311 20:58:44.955127   43208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 20:58:44.970691   43208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 20:58:44.986246   43208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 20:58:45.136855   43208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 20:58:45.282417   43208 docker.go:233] disabling docker service ...
	I0311 20:58:45.282504   43208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 20:58:45.301648   43208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 20:58:45.315745   43208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 20:58:45.456271   43208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 20:58:45.608200   43208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 20:58:45.625497   43208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 20:58:45.648101   43208 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0311 20:58:45.648562   43208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 20:58:45.648615   43208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:58:45.659704   43208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 20:58:45.659761   43208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:58:45.671881   43208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:58:45.683461   43208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:58:45.695287   43208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 20:58:45.706500   43208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 20:58:45.716059   43208 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0311 20:58:45.716212   43208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 20:58:45.726043   43208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:58:45.867507   43208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 20:58:46.110860   43208 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 20:58:46.110937   43208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 20:58:46.117051   43208 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0311 20:58:46.117069   43208 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0311 20:58:46.117075   43208 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0311 20:58:46.117084   43208 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0311 20:58:46.117092   43208 command_runner.go:130] > Access: 2024-03-11 20:58:45.990375949 +0000
	I0311 20:58:46.117102   43208 command_runner.go:130] > Modify: 2024-03-11 20:58:45.981375587 +0000
	I0311 20:58:46.117111   43208 command_runner.go:130] > Change: 2024-03-11 20:58:45.981375587 +0000
	I0311 20:58:46.117122   43208 command_runner.go:130] >  Birth: -
	I0311 20:58:46.117265   43208 start.go:562] Will wait 60s for crictl version
	I0311 20:58:46.117306   43208 ssh_runner.go:195] Run: which crictl
	I0311 20:58:46.121613   43208 command_runner.go:130] > /usr/bin/crictl
	I0311 20:58:46.121656   43208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 20:58:46.160039   43208 command_runner.go:130] > Version:  0.1.0
	I0311 20:58:46.160061   43208 command_runner.go:130] > RuntimeName:  cri-o
	I0311 20:58:46.160069   43208 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0311 20:58:46.160076   43208 command_runner.go:130] > RuntimeApiVersion:  v1
	I0311 20:58:46.160099   43208 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 20:58:46.160158   43208 ssh_runner.go:195] Run: crio --version
	I0311 20:58:46.189699   43208 command_runner.go:130] > crio version 1.29.1
	I0311 20:58:46.189721   43208 command_runner.go:130] > Version:        1.29.1
	I0311 20:58:46.189726   43208 command_runner.go:130] > GitCommit:      unknown
	I0311 20:58:46.189730   43208 command_runner.go:130] > GitCommitDate:  unknown
	I0311 20:58:46.189744   43208 command_runner.go:130] > GitTreeState:   clean
	I0311 20:58:46.189750   43208 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0311 20:58:46.189755   43208 command_runner.go:130] > GoVersion:      go1.21.6
	I0311 20:58:46.189759   43208 command_runner.go:130] > Compiler:       gc
	I0311 20:58:46.189766   43208 command_runner.go:130] > Platform:       linux/amd64
	I0311 20:58:46.189770   43208 command_runner.go:130] > Linkmode:       dynamic
	I0311 20:58:46.189774   43208 command_runner.go:130] > BuildTags:      
	I0311 20:58:46.189779   43208 command_runner.go:130] >   containers_image_ostree_stub
	I0311 20:58:46.189783   43208 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0311 20:58:46.189787   43208 command_runner.go:130] >   btrfs_noversion
	I0311 20:58:46.189794   43208 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0311 20:58:46.189800   43208 command_runner.go:130] >   libdm_no_deferred_remove
	I0311 20:58:46.189805   43208 command_runner.go:130] >   seccomp
	I0311 20:58:46.189812   43208 command_runner.go:130] > LDFlags:          unknown
	I0311 20:58:46.189818   43208 command_runner.go:130] > SeccompEnabled:   true
	I0311 20:58:46.189828   43208 command_runner.go:130] > AppArmorEnabled:  false
	I0311 20:58:46.191233   43208 ssh_runner.go:195] Run: crio --version
	I0311 20:58:46.221405   43208 command_runner.go:130] > crio version 1.29.1
	I0311 20:58:46.221425   43208 command_runner.go:130] > Version:        1.29.1
	I0311 20:58:46.221432   43208 command_runner.go:130] > GitCommit:      unknown
	I0311 20:58:46.221436   43208 command_runner.go:130] > GitCommitDate:  unknown
	I0311 20:58:46.221440   43208 command_runner.go:130] > GitTreeState:   clean
	I0311 20:58:46.221447   43208 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0311 20:58:46.221453   43208 command_runner.go:130] > GoVersion:      go1.21.6
	I0311 20:58:46.221460   43208 command_runner.go:130] > Compiler:       gc
	I0311 20:58:46.221487   43208 command_runner.go:130] > Platform:       linux/amd64
	I0311 20:58:46.221498   43208 command_runner.go:130] > Linkmode:       dynamic
	I0311 20:58:46.221502   43208 command_runner.go:130] > BuildTags:      
	I0311 20:58:46.221506   43208 command_runner.go:130] >   containers_image_ostree_stub
	I0311 20:58:46.221511   43208 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0311 20:58:46.221516   43208 command_runner.go:130] >   btrfs_noversion
	I0311 20:58:46.221520   43208 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0311 20:58:46.221527   43208 command_runner.go:130] >   libdm_no_deferred_remove
	I0311 20:58:46.221530   43208 command_runner.go:130] >   seccomp
	I0311 20:58:46.221534   43208 command_runner.go:130] > LDFlags:          unknown
	I0311 20:58:46.221542   43208 command_runner.go:130] > SeccompEnabled:   true
	I0311 20:58:46.221549   43208 command_runner.go:130] > AppArmorEnabled:  false
	I0311 20:58:46.225382   43208 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 20:58:46.226927   43208 main.go:141] libmachine: (multinode-232100) Calling .GetIP
	I0311 20:58:46.229474   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:46.229844   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:58:46.229872   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:46.230083   43208 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 20:58:46.234716   43208 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0311 20:58:46.234788   43208 kubeadm.go:877] updating cluster {Name:multinode-232100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-232100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 20:58:46.234903   43208 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:58:46.234950   43208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:58:46.285359   43208 command_runner.go:130] > {
	I0311 20:58:46.285380   43208 command_runner.go:130] >   "images": [
	I0311 20:58:46.285384   43208 command_runner.go:130] >     {
	I0311 20:58:46.285395   43208 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0311 20:58:46.285409   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.285418   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0311 20:58:46.285423   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285427   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.285437   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0311 20:58:46.285451   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0311 20:58:46.285461   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285472   43208 command_runner.go:130] >       "size": "65258016",
	I0311 20:58:46.285483   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.285489   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.285502   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.285509   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.285512   43208 command_runner.go:130] >     },
	I0311 20:58:46.285515   43208 command_runner.go:130] >     {
	I0311 20:58:46.285524   43208 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0311 20:58:46.285533   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.285545   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0311 20:58:46.285555   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285565   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.285579   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0311 20:58:46.285589   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0311 20:58:46.285595   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285600   43208 command_runner.go:130] >       "size": "65291810",
	I0311 20:58:46.285606   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.285612   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.285620   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.285630   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.285639   43208 command_runner.go:130] >     },
	I0311 20:58:46.285654   43208 command_runner.go:130] >     {
	I0311 20:58:46.285666   43208 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0311 20:58:46.285676   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.285686   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0311 20:58:46.285692   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285696   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.285707   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0311 20:58:46.285722   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0311 20:58:46.285738   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285749   43208 command_runner.go:130] >       "size": "1363676",
	I0311 20:58:46.285758   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.285768   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.285777   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.285784   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.285787   43208 command_runner.go:130] >     },
	I0311 20:58:46.285797   43208 command_runner.go:130] >     {
	I0311 20:58:46.285807   43208 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0311 20:58:46.285817   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.285828   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0311 20:58:46.285837   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285846   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.285861   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0311 20:58:46.285882   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0311 20:58:46.285893   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285901   43208 command_runner.go:130] >       "size": "31470524",
	I0311 20:58:46.285907   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.285917   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.285926   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.285936   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.285944   43208 command_runner.go:130] >     },
	I0311 20:58:46.285952   43208 command_runner.go:130] >     {
	I0311 20:58:46.285960   43208 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0311 20:58:46.285967   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.285979   43208 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0311 20:58:46.285989   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285999   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286011   43208 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0311 20:58:46.286026   43208 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0311 20:58:46.286035   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286043   43208 command_runner.go:130] >       "size": "53621675",
	I0311 20:58:46.286047   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.286056   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286065   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286076   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.286091   43208 command_runner.go:130] >     },
	I0311 20:58:46.286100   43208 command_runner.go:130] >     {
	I0311 20:58:46.286111   43208 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0311 20:58:46.286121   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.286130   43208 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0311 20:58:46.286137   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286143   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286157   43208 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0311 20:58:46.286172   43208 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0311 20:58:46.286182   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286191   43208 command_runner.go:130] >       "size": "295456551",
	I0311 20:58:46.286201   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.286209   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.286217   43208 command_runner.go:130] >       },
	I0311 20:58:46.286225   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286229   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286238   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.286247   43208 command_runner.go:130] >     },
	I0311 20:58:46.286256   43208 command_runner.go:130] >     {
	I0311 20:58:46.286269   43208 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0311 20:58:46.286278   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.286289   43208 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0311 20:58:46.286298   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286307   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286317   43208 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0311 20:58:46.286329   43208 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0311 20:58:46.286339   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286349   43208 command_runner.go:130] >       "size": "127226832",
	I0311 20:58:46.286358   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.286367   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.286375   43208 command_runner.go:130] >       },
	I0311 20:58:46.286385   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286394   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286401   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.286404   43208 command_runner.go:130] >     },
	I0311 20:58:46.286412   43208 command_runner.go:130] >     {
	I0311 20:58:46.286429   43208 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0311 20:58:46.286439   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.286447   43208 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0311 20:58:46.286453   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286459   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286487   43208 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0311 20:58:46.286498   43208 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0311 20:58:46.286503   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286510   43208 command_runner.go:130] >       "size": "123261750",
	I0311 20:58:46.286516   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.286521   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.286527   43208 command_runner.go:130] >       },
	I0311 20:58:46.286533   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286539   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286548   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.286552   43208 command_runner.go:130] >     },
	I0311 20:58:46.286557   43208 command_runner.go:130] >     {
	I0311 20:58:46.286566   43208 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0311 20:58:46.286573   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.286579   43208 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0311 20:58:46.286585   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286592   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286601   43208 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0311 20:58:46.286611   43208 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0311 20:58:46.286616   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286622   43208 command_runner.go:130] >       "size": "74749335",
	I0311 20:58:46.286627   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.286634   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286639   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286655   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.286661   43208 command_runner.go:130] >     },
	I0311 20:58:46.286666   43208 command_runner.go:130] >     {
	I0311 20:58:46.286676   43208 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0311 20:58:46.286682   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.286690   43208 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0311 20:58:46.286696   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286711   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286723   43208 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0311 20:58:46.286730   43208 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0311 20:58:46.286736   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286740   43208 command_runner.go:130] >       "size": "61551410",
	I0311 20:58:46.286743   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.286747   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.286751   43208 command_runner.go:130] >       },
	I0311 20:58:46.286755   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286758   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286763   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.286766   43208 command_runner.go:130] >     },
	I0311 20:58:46.286769   43208 command_runner.go:130] >     {
	I0311 20:58:46.286775   43208 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0311 20:58:46.286780   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.286784   43208 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0311 20:58:46.286787   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286791   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286798   43208 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0311 20:58:46.286805   43208 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0311 20:58:46.286808   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286820   43208 command_runner.go:130] >       "size": "750414",
	I0311 20:58:46.286823   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.286827   43208 command_runner.go:130] >         "value": "65535"
	I0311 20:58:46.286830   43208 command_runner.go:130] >       },
	I0311 20:58:46.286834   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286841   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286845   43208 command_runner.go:130] >       "pinned": true
	I0311 20:58:46.286848   43208 command_runner.go:130] >     }
	I0311 20:58:46.286851   43208 command_runner.go:130] >   ]
	I0311 20:58:46.286854   43208 command_runner.go:130] > }
	I0311 20:58:46.287020   43208 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 20:58:46.287030   43208 crio.go:415] Images already preloaded, skipping extraction
	I0311 20:58:46.287067   43208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:58:46.322073   43208 command_runner.go:130] > {
	I0311 20:58:46.322103   43208 command_runner.go:130] >   "images": [
	I0311 20:58:46.322111   43208 command_runner.go:130] >     {
	I0311 20:58:46.322120   43208 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0311 20:58:46.322126   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322132   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0311 20:58:46.322136   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322140   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322151   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0311 20:58:46.322160   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0311 20:58:46.322166   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322171   43208 command_runner.go:130] >       "size": "65258016",
	I0311 20:58:46.322175   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.322179   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322184   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322193   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322197   43208 command_runner.go:130] >     },
	I0311 20:58:46.322201   43208 command_runner.go:130] >     {
	I0311 20:58:46.322209   43208 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0311 20:58:46.322213   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322218   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0311 20:58:46.322222   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322227   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322234   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0311 20:58:46.322241   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0311 20:58:46.322247   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322250   43208 command_runner.go:130] >       "size": "65291810",
	I0311 20:58:46.322257   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.322264   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322270   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322274   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322280   43208 command_runner.go:130] >     },
	I0311 20:58:46.322284   43208 command_runner.go:130] >     {
	I0311 20:58:46.322292   43208 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0311 20:58:46.322298   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322303   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0311 20:58:46.322309   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322318   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322327   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0311 20:58:46.322337   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0311 20:58:46.322342   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322346   43208 command_runner.go:130] >       "size": "1363676",
	I0311 20:58:46.322352   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.322356   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322362   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322366   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322372   43208 command_runner.go:130] >     },
	I0311 20:58:46.322376   43208 command_runner.go:130] >     {
	I0311 20:58:46.322384   43208 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0311 20:58:46.322388   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322396   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0311 20:58:46.322399   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322405   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322412   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0311 20:58:46.322426   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0311 20:58:46.322432   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322436   43208 command_runner.go:130] >       "size": "31470524",
	I0311 20:58:46.322442   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.322446   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322452   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322456   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322461   43208 command_runner.go:130] >     },
	I0311 20:58:46.322465   43208 command_runner.go:130] >     {
	I0311 20:58:46.322473   43208 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0311 20:58:46.322478   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322485   43208 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0311 20:58:46.322488   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322495   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322502   43208 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0311 20:58:46.322511   43208 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0311 20:58:46.322517   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322521   43208 command_runner.go:130] >       "size": "53621675",
	I0311 20:58:46.322527   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.322535   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322541   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322545   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322551   43208 command_runner.go:130] >     },
	I0311 20:58:46.322554   43208 command_runner.go:130] >     {
	I0311 20:58:46.322563   43208 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0311 20:58:46.322569   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322574   43208 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0311 20:58:46.322579   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322584   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322593   43208 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0311 20:58:46.322601   43208 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0311 20:58:46.322607   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322611   43208 command_runner.go:130] >       "size": "295456551",
	I0311 20:58:46.322617   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.322621   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.322627   43208 command_runner.go:130] >       },
	I0311 20:58:46.322630   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322636   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322642   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322647   43208 command_runner.go:130] >     },
	I0311 20:58:46.322650   43208 command_runner.go:130] >     {
	I0311 20:58:46.322656   43208 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0311 20:58:46.322662   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322667   43208 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0311 20:58:46.322673   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322677   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322686   43208 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0311 20:58:46.322695   43208 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0311 20:58:46.322701   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322705   43208 command_runner.go:130] >       "size": "127226832",
	I0311 20:58:46.322711   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.322716   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.322720   43208 command_runner.go:130] >       },
	I0311 20:58:46.322726   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322730   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322741   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322746   43208 command_runner.go:130] >     },
	I0311 20:58:46.322750   43208 command_runner.go:130] >     {
	I0311 20:58:46.322758   43208 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0311 20:58:46.322765   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322770   43208 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0311 20:58:46.322775   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322780   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322825   43208 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0311 20:58:46.322838   43208 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0311 20:58:46.322842   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322846   43208 command_runner.go:130] >       "size": "123261750",
	I0311 20:58:46.322851   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.322860   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.322869   43208 command_runner.go:130] >       },
	I0311 20:58:46.322879   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322886   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322890   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322896   43208 command_runner.go:130] >     },
	I0311 20:58:46.322899   43208 command_runner.go:130] >     {
	I0311 20:58:46.322908   43208 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0311 20:58:46.322914   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322919   43208 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0311 20:58:46.322925   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322929   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322939   43208 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0311 20:58:46.322953   43208 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0311 20:58:46.322962   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322969   43208 command_runner.go:130] >       "size": "74749335",
	I0311 20:58:46.322979   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.322989   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322998   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.323002   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.323006   43208 command_runner.go:130] >     },
	I0311 20:58:46.323012   43208 command_runner.go:130] >     {
	I0311 20:58:46.323018   43208 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0311 20:58:46.323028   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.323036   43208 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0311 20:58:46.323042   43208 command_runner.go:130] >       ],
	I0311 20:58:46.323048   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.323063   43208 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0311 20:58:46.323078   43208 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0311 20:58:46.323087   43208 command_runner.go:130] >       ],
	I0311 20:58:46.323099   43208 command_runner.go:130] >       "size": "61551410",
	I0311 20:58:46.323108   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.323115   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.323119   43208 command_runner.go:130] >       },
	I0311 20:58:46.323122   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.323129   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.323133   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.323139   43208 command_runner.go:130] >     },
	I0311 20:58:46.323143   43208 command_runner.go:130] >     {
	I0311 20:58:46.323153   43208 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0311 20:58:46.323162   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.323173   43208 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0311 20:58:46.323179   43208 command_runner.go:130] >       ],
	I0311 20:58:46.323193   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.323207   43208 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0311 20:58:46.323221   43208 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0311 20:58:46.323230   43208 command_runner.go:130] >       ],
	I0311 20:58:46.323238   43208 command_runner.go:130] >       "size": "750414",
	I0311 20:58:46.323242   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.323246   43208 command_runner.go:130] >         "value": "65535"
	I0311 20:58:46.323250   43208 command_runner.go:130] >       },
	I0311 20:58:46.323259   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.323268   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.323278   43208 command_runner.go:130] >       "pinned": true
	I0311 20:58:46.323286   43208 command_runner.go:130] >     }
	I0311 20:58:46.323294   43208 command_runner.go:130] >   ]
	I0311 20:58:46.323299   43208 command_runner.go:130] > }
	I0311 20:58:46.323449   43208 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 20:58:46.323464   43208 cache_images.go:84] Images are preloaded, skipping loading
	I0311 20:58:46.323472   43208 kubeadm.go:928] updating node { 192.168.39.134 8443 v1.28.4 crio true true} ...
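	The JSON listing above is the output of "sudo crictl images --output json", which minikube parses to confirm the preloaded images before skipping the extraction and load steps. A minimal, hypothetical Go sketch of such a check follows; it is not minikube's actual crio.go implementation, and the struct fields only mirror the repoTags/size/pinned keys visible in the log.

// Hypothetical sketch, not minikube's code: parse the crictl image listing
// shown in the log above and print each image's repoTags and pinned flag.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
		Pinned   bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var listing crictlImages
	if err := json.Unmarshal(out, &listing); err != nil {
		log.Fatal(err)
	}
	for _, img := range listing.Images {
		fmt.Printf("%v size=%s pinned=%t\n", img.RepoTags, img.Size, img.Pinned)
	}
}

	Run on the node, this would print one line per image, matching the repoTags and pinned flags logged above.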
	I0311 20:58:46.323584   43208 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-232100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-232100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
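	The kubelet [Unit]/[Service] drop-in above is rendered from per-node parameters (Kubernetes version, hostname override, node IP). The following is a minimal sketch of that kind of templating, assuming a plain text/template rendering rather than minikube's real kubeadm.go code; the parameter values are copied from the node config logged above.

// Hypothetical sketch, not minikube's kubeadm.go: render a kubelet systemd
// drop-in like the one logged above from a few node parameters.
package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the node config logged above (assumed for illustration).
	params := map[string]string{
		"KubernetesVersion": "v1.28.4",
		"NodeName":          "multinode-232100",
		"NodeIP":            "192.168.39.134",
	}
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}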
	I0311 20:58:46.323667   43208 ssh_runner.go:195] Run: crio config
	I0311 20:58:46.363856   43208 command_runner.go:130] ! time="2024-03-11 20:58:46.344746966Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0311 20:58:46.369273   43208 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0311 20:58:46.380323   43208 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0311 20:58:46.380343   43208 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0311 20:58:46.380353   43208 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0311 20:58:46.380358   43208 command_runner.go:130] > #
	I0311 20:58:46.380373   43208 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0311 20:58:46.380385   43208 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0311 20:58:46.380393   43208 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0311 20:58:46.380400   43208 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0311 20:58:46.380406   43208 command_runner.go:130] > # reload'.
	I0311 20:58:46.380413   43208 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0311 20:58:46.380421   43208 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0311 20:58:46.380428   43208 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0311 20:58:46.380436   43208 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0311 20:58:46.380442   43208 command_runner.go:130] > [crio]
	I0311 20:58:46.380447   43208 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0311 20:58:46.380454   43208 command_runner.go:130] > # containers images, in this directory.
	I0311 20:58:46.380459   43208 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0311 20:58:46.380470   43208 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0311 20:58:46.380478   43208 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0311 20:58:46.380485   43208 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory instead of under Root.
	I0311 20:58:46.380491   43208 command_runner.go:130] > # imagestore = ""
	I0311 20:58:46.380497   43208 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0311 20:58:46.380506   43208 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0311 20:58:46.380517   43208 command_runner.go:130] > storage_driver = "overlay"
	I0311 20:58:46.380524   43208 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0311 20:58:46.380533   43208 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0311 20:58:46.380537   43208 command_runner.go:130] > storage_option = [
	I0311 20:58:46.380544   43208 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0311 20:58:46.380547   43208 command_runner.go:130] > ]
	I0311 20:58:46.380553   43208 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0311 20:58:46.380561   43208 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0311 20:58:46.380568   43208 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0311 20:58:46.380574   43208 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0311 20:58:46.380581   43208 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0311 20:58:46.380586   43208 command_runner.go:130] > # always happen on a node reboot
	I0311 20:58:46.380593   43208 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0311 20:58:46.380603   43208 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0311 20:58:46.380612   43208 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0311 20:58:46.380620   43208 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0311 20:58:46.380627   43208 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0311 20:58:46.380634   43208 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0311 20:58:46.380644   43208 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0311 20:58:46.380650   43208 command_runner.go:130] > # internal_wipe = true
	I0311 20:58:46.380657   43208 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0311 20:58:46.380665   43208 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0311 20:58:46.380672   43208 command_runner.go:130] > # internal_repair = false
	I0311 20:58:46.380678   43208 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0311 20:58:46.380691   43208 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0311 20:58:46.380699   43208 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0311 20:58:46.380705   43208 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0311 20:58:46.380713   43208 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0311 20:58:46.380719   43208 command_runner.go:130] > [crio.api]
	I0311 20:58:46.380725   43208 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0311 20:58:46.380732   43208 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0311 20:58:46.380759   43208 command_runner.go:130] > # IP address on which the stream server will listen.
	I0311 20:58:46.380766   43208 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0311 20:58:46.380773   43208 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0311 20:58:46.380780   43208 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0311 20:58:46.380785   43208 command_runner.go:130] > # stream_port = "0"
	I0311 20:58:46.380800   43208 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0311 20:58:46.380806   43208 command_runner.go:130] > # stream_enable_tls = false
	I0311 20:58:46.380812   43208 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0311 20:58:46.380818   43208 command_runner.go:130] > # stream_idle_timeout = ""
	I0311 20:58:46.380824   43208 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0311 20:58:46.380833   43208 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0311 20:58:46.380836   43208 command_runner.go:130] > # minutes.
	I0311 20:58:46.380843   43208 command_runner.go:130] > # stream_tls_cert = ""
	I0311 20:58:46.380848   43208 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0311 20:58:46.380854   43208 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0311 20:58:46.380860   43208 command_runner.go:130] > # stream_tls_key = ""
	I0311 20:58:46.380865   43208 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0311 20:58:46.380873   43208 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0311 20:58:46.380893   43208 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0311 20:58:46.380899   43208 command_runner.go:130] > # stream_tls_ca = ""
	I0311 20:58:46.380906   43208 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0311 20:58:46.380913   43208 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0311 20:58:46.380920   43208 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0311 20:58:46.380926   43208 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0311 20:58:46.380932   43208 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0311 20:58:46.380940   43208 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0311 20:58:46.380946   43208 command_runner.go:130] > [crio.runtime]
	I0311 20:58:46.380952   43208 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0311 20:58:46.380960   43208 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0311 20:58:46.380966   43208 command_runner.go:130] > # "nofile=1024:2048"
	I0311 20:58:46.380973   43208 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0311 20:58:46.380980   43208 command_runner.go:130] > # default_ulimits = [
	I0311 20:58:46.380983   43208 command_runner.go:130] > # ]
	I0311 20:58:46.380989   43208 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0311 20:58:46.380995   43208 command_runner.go:130] > # no_pivot = false
	I0311 20:58:46.381001   43208 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0311 20:58:46.381009   43208 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0311 20:58:46.381016   43208 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0311 20:58:46.381025   43208 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0311 20:58:46.381030   43208 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0311 20:58:46.381038   43208 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0311 20:58:46.381050   43208 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0311 20:58:46.381057   43208 command_runner.go:130] > # Cgroup setting for conmon
	I0311 20:58:46.381063   43208 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0311 20:58:46.381070   43208 command_runner.go:130] > conmon_cgroup = "pod"
	I0311 20:58:46.381076   43208 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0311 20:58:46.381083   43208 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0311 20:58:46.381090   43208 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0311 20:58:46.381096   43208 command_runner.go:130] > conmon_env = [
	I0311 20:58:46.381101   43208 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0311 20:58:46.381109   43208 command_runner.go:130] > ]
	I0311 20:58:46.381114   43208 command_runner.go:130] > # Additional environment variables to set for all the
	I0311 20:58:46.381119   43208 command_runner.go:130] > # containers. These are overridden if set in the
	I0311 20:58:46.381127   43208 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0311 20:58:46.381131   43208 command_runner.go:130] > # default_env = [
	I0311 20:58:46.381134   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381139   43208 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0311 20:58:46.381146   43208 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0311 20:58:46.381150   43208 command_runner.go:130] > # selinux = false
	I0311 20:58:46.381156   43208 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0311 20:58:46.381161   43208 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0311 20:58:46.381169   43208 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0311 20:58:46.381173   43208 command_runner.go:130] > # seccomp_profile = ""
	I0311 20:58:46.381181   43208 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0311 20:58:46.381186   43208 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0311 20:58:46.381194   43208 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0311 20:58:46.381201   43208 command_runner.go:130] > # which might increase security.
	I0311 20:58:46.381205   43208 command_runner.go:130] > # This option is currently deprecated,
	I0311 20:58:46.381214   43208 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0311 20:58:46.381221   43208 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0311 20:58:46.381227   43208 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0311 20:58:46.381236   43208 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0311 20:58:46.381242   43208 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0311 20:58:46.381250   43208 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0311 20:58:46.381257   43208 command_runner.go:130] > # This option supports live configuration reload.
	I0311 20:58:46.381262   43208 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0311 20:58:46.381270   43208 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0311 20:58:46.381279   43208 command_runner.go:130] > # the cgroup blockio controller.
	I0311 20:58:46.381285   43208 command_runner.go:130] > # blockio_config_file = ""
	I0311 20:58:46.381292   43208 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0311 20:58:46.381298   43208 command_runner.go:130] > # blockio parameters.
	I0311 20:58:46.381301   43208 command_runner.go:130] > # blockio_reload = false
	I0311 20:58:46.381310   43208 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0311 20:58:46.381316   43208 command_runner.go:130] > # irqbalance daemon.
	I0311 20:58:46.381321   43208 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0311 20:58:46.381329   43208 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0311 20:58:46.381338   43208 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0311 20:58:46.381344   43208 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0311 20:58:46.381352   43208 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0311 20:58:46.381358   43208 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0311 20:58:46.381365   43208 command_runner.go:130] > # This option supports live configuration reload.
	I0311 20:58:46.381369   43208 command_runner.go:130] > # rdt_config_file = ""
	I0311 20:58:46.381376   43208 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0311 20:58:46.381380   43208 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0311 20:58:46.381409   43208 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0311 20:58:46.381418   43208 command_runner.go:130] > # separate_pull_cgroup = ""
	I0311 20:58:46.381423   43208 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0311 20:58:46.381429   43208 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0311 20:58:46.381434   43208 command_runner.go:130] > # will be added.
	I0311 20:58:46.381438   43208 command_runner.go:130] > # default_capabilities = [
	I0311 20:58:46.381445   43208 command_runner.go:130] > # 	"CHOWN",
	I0311 20:58:46.381448   43208 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0311 20:58:46.381454   43208 command_runner.go:130] > # 	"FSETID",
	I0311 20:58:46.381458   43208 command_runner.go:130] > # 	"FOWNER",
	I0311 20:58:46.381462   43208 command_runner.go:130] > # 	"SETGID",
	I0311 20:58:46.381465   43208 command_runner.go:130] > # 	"SETUID",
	I0311 20:58:46.381469   43208 command_runner.go:130] > # 	"SETPCAP",
	I0311 20:58:46.381475   43208 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0311 20:58:46.381479   43208 command_runner.go:130] > # 	"KILL",
	I0311 20:58:46.381484   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381492   43208 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0311 20:58:46.381500   43208 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0311 20:58:46.381507   43208 command_runner.go:130] > # add_inheritable_capabilities = false
	I0311 20:58:46.381518   43208 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0311 20:58:46.381527   43208 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0311 20:58:46.381533   43208 command_runner.go:130] > # default_sysctls = [
	I0311 20:58:46.381536   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381541   43208 command_runner.go:130] > # List of devices on the host that a
	I0311 20:58:46.381548   43208 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0311 20:58:46.381555   43208 command_runner.go:130] > # allowed_devices = [
	I0311 20:58:46.381558   43208 command_runner.go:130] > # 	"/dev/fuse",
	I0311 20:58:46.381564   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381568   43208 command_runner.go:130] > # List of additional devices, specified as
	I0311 20:58:46.381577   43208 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0311 20:58:46.381585   43208 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0311 20:58:46.381593   43208 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0311 20:58:46.381597   43208 command_runner.go:130] > # additional_devices = [
	I0311 20:58:46.381603   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381608   43208 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0311 20:58:46.381615   43208 command_runner.go:130] > # cdi_spec_dirs = [
	I0311 20:58:46.381619   43208 command_runner.go:130] > # 	"/etc/cdi",
	I0311 20:58:46.381625   43208 command_runner.go:130] > # 	"/var/run/cdi",
	I0311 20:58:46.381628   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381636   43208 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0311 20:58:46.381643   43208 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0311 20:58:46.381650   43208 command_runner.go:130] > # Defaults to false.
	I0311 20:58:46.381655   43208 command_runner.go:130] > # device_ownership_from_security_context = false
	I0311 20:58:46.381663   43208 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0311 20:58:46.381671   43208 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0311 20:58:46.381677   43208 command_runner.go:130] > # hooks_dir = [
	I0311 20:58:46.381681   43208 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0311 20:58:46.381691   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381699   43208 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0311 20:58:46.381707   43208 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0311 20:58:46.381715   43208 command_runner.go:130] > # its default mounts from the following two files:
	I0311 20:58:46.381720   43208 command_runner.go:130] > #
	I0311 20:58:46.381726   43208 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0311 20:58:46.381735   43208 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0311 20:58:46.381743   43208 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0311 20:58:46.381752   43208 command_runner.go:130] > #
	I0311 20:58:46.381761   43208 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0311 20:58:46.381767   43208 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0311 20:58:46.381776   43208 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0311 20:58:46.381783   43208 command_runner.go:130] > #      only add mounts it finds in this file.
	I0311 20:58:46.381786   43208 command_runner.go:130] > #
	I0311 20:58:46.381793   43208 command_runner.go:130] > # default_mounts_file = ""
	I0311 20:58:46.381798   43208 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0311 20:58:46.381806   43208 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0311 20:58:46.381813   43208 command_runner.go:130] > pids_limit = 1024
	I0311 20:58:46.381819   43208 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0311 20:58:46.381827   43208 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0311 20:58:46.381833   43208 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0311 20:58:46.381843   43208 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0311 20:58:46.381849   43208 command_runner.go:130] > # log_size_max = -1
	I0311 20:58:46.381856   43208 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0311 20:58:46.381863   43208 command_runner.go:130] > # log_to_journald = false
	I0311 20:58:46.381869   43208 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0311 20:58:46.381876   43208 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0311 20:58:46.381884   43208 command_runner.go:130] > # Path to directory for container attach sockets.
	I0311 20:58:46.381889   43208 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0311 20:58:46.381896   43208 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0311 20:58:46.381900   43208 command_runner.go:130] > # bind_mount_prefix = ""
	I0311 20:58:46.381906   43208 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0311 20:58:46.381912   43208 command_runner.go:130] > # read_only = false
	I0311 20:58:46.381918   43208 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0311 20:58:46.381927   43208 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0311 20:58:46.381933   43208 command_runner.go:130] > # live configuration reload.
	I0311 20:58:46.381937   43208 command_runner.go:130] > # log_level = "info"
	I0311 20:58:46.381945   43208 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0311 20:58:46.381952   43208 command_runner.go:130] > # This option supports live configuration reload.
	I0311 20:58:46.381956   43208 command_runner.go:130] > # log_filter = ""
	I0311 20:58:46.381965   43208 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0311 20:58:46.381974   43208 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0311 20:58:46.381979   43208 command_runner.go:130] > # separated by comma.
	I0311 20:58:46.381987   43208 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0311 20:58:46.381997   43208 command_runner.go:130] > # uid_mappings = ""
	I0311 20:58:46.382005   43208 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0311 20:58:46.382013   43208 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0311 20:58:46.382020   43208 command_runner.go:130] > # separated by comma.
	I0311 20:58:46.382027   43208 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0311 20:58:46.382033   43208 command_runner.go:130] > # gid_mappings = ""
	I0311 20:58:46.382039   43208 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0311 20:58:46.382046   43208 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0311 20:58:46.382054   43208 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0311 20:58:46.382061   43208 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0311 20:58:46.382068   43208 command_runner.go:130] > # minimum_mappable_uid = -1
	I0311 20:58:46.382074   43208 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0311 20:58:46.382084   43208 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0311 20:58:46.382092   43208 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0311 20:58:46.382101   43208 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0311 20:58:46.382108   43208 command_runner.go:130] > # minimum_mappable_gid = -1
	I0311 20:58:46.382113   43208 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0311 20:58:46.382122   43208 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0311 20:58:46.382130   43208 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0311 20:58:46.382134   43208 command_runner.go:130] > # ctr_stop_timeout = 30
	I0311 20:58:46.382142   43208 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0311 20:58:46.382150   43208 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0311 20:58:46.382154   43208 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0311 20:58:46.382159   43208 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0311 20:58:46.382165   43208 command_runner.go:130] > drop_infra_ctr = false
	I0311 20:58:46.382171   43208 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0311 20:58:46.382179   43208 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0311 20:58:46.382189   43208 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0311 20:58:46.382195   43208 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0311 20:58:46.382202   43208 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0311 20:58:46.382209   43208 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0311 20:58:46.382215   43208 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0311 20:58:46.382221   43208 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0311 20:58:46.382225   43208 command_runner.go:130] > # shared_cpuset = ""
	I0311 20:58:46.382233   43208 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0311 20:58:46.382241   43208 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0311 20:58:46.382249   43208 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0311 20:58:46.382258   43208 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0311 20:58:46.382264   43208 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0311 20:58:46.382270   43208 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0311 20:58:46.382278   43208 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0311 20:58:46.382283   43208 command_runner.go:130] > # enable_criu_support = false
	I0311 20:58:46.382287   43208 command_runner.go:130] > # Enable/disable the generation of the container,
	I0311 20:58:46.382296   43208 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0311 20:58:46.382303   43208 command_runner.go:130] > # enable_pod_events = false
	I0311 20:58:46.382308   43208 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0311 20:58:46.382316   43208 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0311 20:58:46.382322   43208 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0311 20:58:46.382329   43208 command_runner.go:130] > # default_runtime = "runc"
	I0311 20:58:46.382338   43208 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0311 20:58:46.382348   43208 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0311 20:58:46.382358   43208 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0311 20:58:46.382366   43208 command_runner.go:130] > # creation as a file is not desired either.
	I0311 20:58:46.382373   43208 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0311 20:58:46.382380   43208 command_runner.go:130] > # the hostname is being managed dynamically.
	I0311 20:58:46.382384   43208 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0311 20:58:46.382390   43208 command_runner.go:130] > # ]
	I0311 20:58:46.382396   43208 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0311 20:58:46.382404   43208 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0311 20:58:46.382410   43208 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0311 20:58:46.382417   43208 command_runner.go:130] > # Each entry in the table should follow the format:
	I0311 20:58:46.382420   43208 command_runner.go:130] > #
	I0311 20:58:46.382425   43208 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0311 20:58:46.382432   43208 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0311 20:58:46.382436   43208 command_runner.go:130] > # runtime_type = "oci"
	I0311 20:58:46.382499   43208 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0311 20:58:46.382510   43208 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0311 20:58:46.382520   43208 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0311 20:58:46.382526   43208 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0311 20:58:46.382533   43208 command_runner.go:130] > # monitor_env = []
	I0311 20:58:46.382537   43208 command_runner.go:130] > # privileged_without_host_devices = false
	I0311 20:58:46.382544   43208 command_runner.go:130] > # allowed_annotations = []
	I0311 20:58:46.382556   43208 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0311 20:58:46.382563   43208 command_runner.go:130] > # Where:
	I0311 20:58:46.382568   43208 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0311 20:58:46.382577   43208 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0311 20:58:46.382585   43208 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0311 20:58:46.382593   43208 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0311 20:58:46.382599   43208 command_runner.go:130] > #   in $PATH.
	I0311 20:58:46.382605   43208 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0311 20:58:46.382612   43208 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0311 20:58:46.382620   43208 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0311 20:58:46.382626   43208 command_runner.go:130] > #   state.
	I0311 20:58:46.382632   43208 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0311 20:58:46.382640   43208 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0311 20:58:46.382649   43208 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0311 20:58:46.382656   43208 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0311 20:58:46.382664   43208 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0311 20:58:46.382673   43208 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0311 20:58:46.382680   43208 command_runner.go:130] > #   The currently recognized values are:
	I0311 20:58:46.382691   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0311 20:58:46.382700   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0311 20:58:46.382708   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0311 20:58:46.382716   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0311 20:58:46.382726   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0311 20:58:46.382735   43208 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0311 20:58:46.382744   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0311 20:58:46.382752   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0311 20:58:46.382760   43208 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0311 20:58:46.382765   43208 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0311 20:58:46.382772   43208 command_runner.go:130] > #   deprecated option "conmon".
	I0311 20:58:46.382778   43208 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0311 20:58:46.382786   43208 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0311 20:58:46.382792   43208 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0311 20:58:46.382799   43208 command_runner.go:130] > #   should be moved to the container's cgroup
	I0311 20:58:46.382805   43208 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0311 20:58:46.382812   43208 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0311 20:58:46.382818   43208 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0311 20:58:46.382830   43208 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0311 20:58:46.382835   43208 command_runner.go:130] > #
	I0311 20:58:46.382840   43208 command_runner.go:130] > # Using the seccomp notifier feature:
	I0311 20:58:46.382846   43208 command_runner.go:130] > #
	I0311 20:58:46.382851   43208 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0311 20:58:46.382859   43208 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0311 20:58:46.382862   43208 command_runner.go:130] > #
	I0311 20:58:46.382870   43208 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0311 20:58:46.382876   43208 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0311 20:58:46.382882   43208 command_runner.go:130] > #
	I0311 20:58:46.382887   43208 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0311 20:58:46.382893   43208 command_runner.go:130] > # feature.
	I0311 20:58:46.382897   43208 command_runner.go:130] > #
	I0311 20:58:46.382905   43208 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0311 20:58:46.382911   43208 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0311 20:58:46.382919   43208 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0311 20:58:46.382927   43208 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0311 20:58:46.382934   43208 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0311 20:58:46.382939   43208 command_runner.go:130] > #
	I0311 20:58:46.382944   43208 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0311 20:58:46.382952   43208 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0311 20:58:46.382955   43208 command_runner.go:130] > #
	I0311 20:58:46.382963   43208 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0311 20:58:46.382968   43208 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0311 20:58:46.382974   43208 command_runner.go:130] > #
	I0311 20:58:46.382980   43208 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0311 20:58:46.382988   43208 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0311 20:58:46.382994   43208 command_runner.go:130] > # limitation.
	I0311 20:58:46.382998   43208 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0311 20:58:46.383005   43208 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0311 20:58:46.383009   43208 command_runner.go:130] > runtime_type = "oci"
	I0311 20:58:46.383015   43208 command_runner.go:130] > runtime_root = "/run/runc"
	I0311 20:58:46.383020   43208 command_runner.go:130] > runtime_config_path = ""
	I0311 20:58:46.383027   43208 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0311 20:58:46.383030   43208 command_runner.go:130] > monitor_cgroup = "pod"
	I0311 20:58:46.383037   43208 command_runner.go:130] > monitor_exec_cgroup = ""
	I0311 20:58:46.383044   43208 command_runner.go:130] > monitor_env = [
	I0311 20:58:46.383052   43208 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0311 20:58:46.383058   43208 command_runner.go:130] > ]
	I0311 20:58:46.383062   43208 command_runner.go:130] > privileged_without_host_devices = false
	I0311 20:58:46.383071   43208 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0311 20:58:46.383078   43208 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0311 20:58:46.383084   43208 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0311 20:58:46.383093   43208 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0311 20:58:46.383103   43208 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0311 20:58:46.383111   43208 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0311 20:58:46.383122   43208 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0311 20:58:46.383132   43208 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0311 20:58:46.383139   43208 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0311 20:58:46.383149   43208 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0311 20:58:46.383153   43208 command_runner.go:130] > # Example:
	I0311 20:58:46.383161   43208 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0311 20:58:46.383166   43208 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0311 20:58:46.383173   43208 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0311 20:58:46.383178   43208 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0311 20:58:46.383181   43208 command_runner.go:130] > # cpuset = 0
	I0311 20:58:46.383185   43208 command_runner.go:130] > # cpushares = "0-1"
	I0311 20:58:46.383188   43208 command_runner.go:130] > # Where:
	I0311 20:58:46.383192   43208 command_runner.go:130] > # The workload name is workload-type.
	I0311 20:58:46.383198   43208 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0311 20:58:46.383202   43208 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0311 20:58:46.383207   43208 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0311 20:58:46.383214   43208 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0311 20:58:46.383219   43208 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0311 20:58:46.383223   43208 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0311 20:58:46.383229   43208 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0311 20:58:46.383233   43208 command_runner.go:130] > # Default value is set to true
	I0311 20:58:46.383237   43208 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0311 20:58:46.383242   43208 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0311 20:58:46.383246   43208 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0311 20:58:46.383250   43208 command_runner.go:130] > # Default value is set to 'false'
	I0311 20:58:46.383254   43208 command_runner.go:130] > # disable_hostport_mapping = false
	I0311 20:58:46.383264   43208 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0311 20:58:46.383267   43208 command_runner.go:130] > #
	I0311 20:58:46.383272   43208 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0311 20:58:46.383278   43208 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0311 20:58:46.383283   43208 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0311 20:58:46.383289   43208 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0311 20:58:46.383293   43208 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0311 20:58:46.383297   43208 command_runner.go:130] > [crio.image]
	I0311 20:58:46.383302   43208 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0311 20:58:46.383306   43208 command_runner.go:130] > # default_transport = "docker://"
	I0311 20:58:46.383311   43208 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0311 20:58:46.383317   43208 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0311 20:58:46.383321   43208 command_runner.go:130] > # global_auth_file = ""
	I0311 20:58:46.383325   43208 command_runner.go:130] > # The image used to instantiate infra containers.
	I0311 20:58:46.383330   43208 command_runner.go:130] > # This option supports live configuration reload.
	I0311 20:58:46.383334   43208 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0311 20:58:46.383340   43208 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0311 20:58:46.383345   43208 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0311 20:58:46.383353   43208 command_runner.go:130] > # This option supports live configuration reload.
	I0311 20:58:46.383357   43208 command_runner.go:130] > # pause_image_auth_file = ""
	I0311 20:58:46.383365   43208 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0311 20:58:46.383373   43208 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0311 20:58:46.383379   43208 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0311 20:58:46.383386   43208 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0311 20:58:46.383390   43208 command_runner.go:130] > # pause_command = "/pause"
	I0311 20:58:46.383397   43208 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0311 20:58:46.383406   43208 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0311 20:58:46.383412   43208 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0311 20:58:46.383420   43208 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0311 20:58:46.383427   43208 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0311 20:58:46.383436   43208 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0311 20:58:46.383441   43208 command_runner.go:130] > # pinned_images = [
	I0311 20:58:46.383445   43208 command_runner.go:130] > # ]
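	A hedged illustration of the three match styles the pinned_images comments describe; the image names below are placeholders and do not come from this cluster:

	pinned_images = [
		"registry.k8s.io/pause:3.9",        # exact: must match the entire name
		"registry.k8s.io/kube-apiserver*",  # glob: wildcard only at the end
		"*coredns*",                        # keyword: wildcards on both ends
	]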
	I0311 20:58:46.383453   43208 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0311 20:58:46.383461   43208 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0311 20:58:46.383467   43208 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0311 20:58:46.383479   43208 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0311 20:58:46.383487   43208 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0311 20:58:46.383493   43208 command_runner.go:130] > # signature_policy = ""
	I0311 20:58:46.383499   43208 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0311 20:58:46.383508   43208 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0311 20:58:46.383516   43208 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0311 20:58:46.383524   43208 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0311 20:58:46.383530   43208 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0311 20:58:46.383538   43208 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0311 20:58:46.383543   43208 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0311 20:58:46.383551   43208 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0311 20:58:46.383558   43208 command_runner.go:130] > # changing them here.
	I0311 20:58:46.383564   43208 command_runner.go:130] > # insecure_registries = [
	I0311 20:58:46.383568   43208 command_runner.go:130] > # ]
	I0311 20:58:46.383576   43208 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0311 20:58:46.383584   43208 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0311 20:58:46.383588   43208 command_runner.go:130] > # image_volumes = "mkdir"
	I0311 20:58:46.383595   43208 command_runner.go:130] > # Temporary directory to use for storing big files
	I0311 20:58:46.383599   43208 command_runner.go:130] > # big_files_temporary_dir = ""
	I0311 20:58:46.383607   43208 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0311 20:58:46.383611   43208 command_runner.go:130] > # CNI plugins.
	I0311 20:58:46.383615   43208 command_runner.go:130] > [crio.network]
	I0311 20:58:46.383623   43208 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0311 20:58:46.383628   43208 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0311 20:58:46.383635   43208 command_runner.go:130] > # cni_default_network = ""
	I0311 20:58:46.383640   43208 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0311 20:58:46.383646   43208 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0311 20:58:46.383652   43208 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0311 20:58:46.383658   43208 command_runner.go:130] > # plugin_dirs = [
	I0311 20:58:46.383662   43208 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0311 20:58:46.383667   43208 command_runner.go:130] > # ]
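	For reference, a minimal sketch that sets the CNI paths discussed above to the documented defaults; nothing here is specific to this run:

	[crio.network]
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]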
	I0311 20:58:46.383673   43208 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0311 20:58:46.383679   43208 command_runner.go:130] > [crio.metrics]
	I0311 20:58:46.383684   43208 command_runner.go:130] > # Globally enable or disable metrics support.
	I0311 20:58:46.383692   43208 command_runner.go:130] > enable_metrics = true
	I0311 20:58:46.383696   43208 command_runner.go:130] > # Specify enabled metrics collectors.
	I0311 20:58:46.383710   43208 command_runner.go:130] > # By default, all metrics are enabled.
	I0311 20:58:46.383718   43208 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0311 20:58:46.383726   43208 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0311 20:58:46.383734   43208 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0311 20:58:46.383738   43208 command_runner.go:130] > # metrics_collectors = [
	I0311 20:58:46.383744   43208 command_runner.go:130] > # 	"operations",
	I0311 20:58:46.383748   43208 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0311 20:58:46.383755   43208 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0311 20:58:46.383759   43208 command_runner.go:130] > # 	"operations_errors",
	I0311 20:58:46.383766   43208 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0311 20:58:46.383770   43208 command_runner.go:130] > # 	"image_pulls_by_name",
	I0311 20:58:46.383777   43208 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0311 20:58:46.383781   43208 command_runner.go:130] > # 	"image_pulls_failures",
	I0311 20:58:46.383785   43208 command_runner.go:130] > # 	"image_pulls_successes",
	I0311 20:58:46.383790   43208 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0311 20:58:46.383794   43208 command_runner.go:130] > # 	"image_layer_reuse",
	I0311 20:58:46.383800   43208 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0311 20:58:46.383804   43208 command_runner.go:130] > # 	"containers_oom_total",
	I0311 20:58:46.383810   43208 command_runner.go:130] > # 	"containers_oom",
	I0311 20:58:46.383815   43208 command_runner.go:130] > # 	"processes_defunct",
	I0311 20:58:46.383834   43208 command_runner.go:130] > # 	"operations_total",
	I0311 20:58:46.383838   43208 command_runner.go:130] > # 	"operations_latency_seconds",
	I0311 20:58:46.383845   43208 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0311 20:58:46.383849   43208 command_runner.go:130] > # 	"operations_errors_total",
	I0311 20:58:46.383856   43208 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0311 20:58:46.383861   43208 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0311 20:58:46.383867   43208 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0311 20:58:46.383871   43208 command_runner.go:130] > # 	"image_pulls_success_total",
	I0311 20:58:46.383877   43208 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0311 20:58:46.383881   43208 command_runner.go:130] > # 	"containers_oom_count_total",
	I0311 20:58:46.383886   43208 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0311 20:58:46.383892   43208 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0311 20:58:46.383896   43208 command_runner.go:130] > # ]
	I0311 20:58:46.383904   43208 command_runner.go:130] > # The port on which the metrics server will listen.
	I0311 20:58:46.383908   43208 command_runner.go:130] > # metrics_port = 9090
	I0311 20:58:46.383915   43208 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0311 20:58:46.383924   43208 command_runner.go:130] > # metrics_socket = ""
	I0311 20:58:46.383931   43208 command_runner.go:130] > # The certificate for the secure metrics server.
	I0311 20:58:46.383937   43208 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0311 20:58:46.383947   43208 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0311 20:58:46.383954   43208 command_runner.go:130] > # certificate on any modification event.
	I0311 20:58:46.383957   43208 command_runner.go:130] > # metrics_cert = ""
	I0311 20:58:46.383965   43208 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0311 20:58:46.383969   43208 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0311 20:58:46.383975   43208 command_runner.go:130] > # metrics_key = ""
	I0311 20:58:46.383981   43208 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0311 20:58:46.383987   43208 command_runner.go:130] > [crio.tracing]
	I0311 20:58:46.383992   43208 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0311 20:58:46.383999   43208 command_runner.go:130] > # enable_tracing = false
	I0311 20:58:46.384008   43208 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0311 20:58:46.384015   43208 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0311 20:58:46.384021   43208 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0311 20:58:46.384029   43208 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0311 20:58:46.384033   43208 command_runner.go:130] > # CRI-O NRI configuration.
	I0311 20:58:46.384038   43208 command_runner.go:130] > [crio.nri]
	I0311 20:58:46.384043   43208 command_runner.go:130] > # Globally enable or disable NRI.
	I0311 20:58:46.384048   43208 command_runner.go:130] > # enable_nri = false
	I0311 20:58:46.384052   43208 command_runner.go:130] > # NRI socket to listen on.
	I0311 20:58:46.384056   43208 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0311 20:58:46.384063   43208 command_runner.go:130] > # NRI plugin directory to use.
	I0311 20:58:46.384068   43208 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0311 20:58:46.384075   43208 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0311 20:58:46.384079   43208 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0311 20:58:46.384087   43208 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0311 20:58:46.384091   43208 command_runner.go:130] > # nri_disable_connections = false
	I0311 20:58:46.384098   43208 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0311 20:58:46.384103   43208 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0311 20:58:46.384114   43208 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0311 20:58:46.384121   43208 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0311 20:58:46.384127   43208 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0311 20:58:46.384134   43208 command_runner.go:130] > [crio.stats]
	I0311 20:58:46.384139   43208 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0311 20:58:46.384151   43208 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0311 20:58:46.384158   43208 command_runner.go:130] > # stats_collection_period = 0
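	Taken together, a small drop-in override touching the metrics, tracing, and stats knobs shown in the dump above could look like the following. This is a sketch of the documented options only, not a file present on this node, and the drop-in path is an assumption:

	# /etc/crio/crio.conf.d/99-observability.conf  (hypothetical drop-in)
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"

	[crio.stats]
	# Collect pod and container stats every 10 seconds instead of on demand.
	stats_collection_period = 10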
	I0311 20:58:46.384306   43208 cni.go:84] Creating CNI manager for ""
	I0311 20:58:46.384319   43208 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0311 20:58:46.384328   43208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 20:58:46.384345   43208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-232100 NodeName:multinode-232100 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 20:58:46.384460   43208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-232100"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 20:58:46.384519   43208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 20:58:46.395037   43208 command_runner.go:130] > kubeadm
	I0311 20:58:46.395055   43208 command_runner.go:130] > kubectl
	I0311 20:58:46.395060   43208 command_runner.go:130] > kubelet
	I0311 20:58:46.395352   43208 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 20:58:46.395403   43208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 20:58:46.405233   43208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0311 20:58:46.423263   43208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 20:58:46.441969   43208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0311 20:58:46.459603   43208 ssh_runner.go:195] Run: grep 192.168.39.134	control-plane.minikube.internal$ /etc/hosts
	I0311 20:58:46.463430   43208 command_runner.go:130] > 192.168.39.134	control-plane.minikube.internal
	I0311 20:58:46.463596   43208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:58:46.606718   43208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:58:46.623414   43208 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100 for IP: 192.168.39.134
	I0311 20:58:46.623435   43208 certs.go:194] generating shared ca certs ...
	I0311 20:58:46.623454   43208 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:58:46.623599   43208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 20:58:46.623673   43208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 20:58:46.623688   43208 certs.go:256] generating profile certs ...
	I0311 20:58:46.623855   43208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/client.key
	I0311 20:58:46.623987   43208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/apiserver.key.81468c01
	I0311 20:58:46.624089   43208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/proxy-client.key
	I0311 20:58:46.624107   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0311 20:58:46.624128   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0311 20:58:46.624148   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0311 20:58:46.624173   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0311 20:58:46.624203   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0311 20:58:46.624226   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0311 20:58:46.624256   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0311 20:58:46.624309   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0311 20:58:46.624383   43208 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 20:58:46.624432   43208 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 20:58:46.624447   43208 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 20:58:46.624482   43208 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 20:58:46.624523   43208 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 20:58:46.624558   43208 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 20:58:46.624624   43208 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:58:46.624667   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /usr/share/ca-certificates/182352.pem
	I0311 20:58:46.624693   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:58:46.624725   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem -> /usr/share/ca-certificates/18235.pem
	I0311 20:58:46.625393   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 20:58:46.653405   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 20:58:46.681479   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 20:58:46.710610   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 20:58:46.739152   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0311 20:58:46.767216   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 20:58:46.795692   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 20:58:46.822161   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 20:58:46.848177   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 20:58:46.873810   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 20:58:46.924491   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 20:58:46.952844   43208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 20:58:46.971673   43208 ssh_runner.go:195] Run: openssl version
	I0311 20:58:46.977932   43208 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0311 20:58:46.978218   43208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 20:58:46.989932   43208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 20:58:46.994637   43208 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 20:58:46.994665   43208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 20:58:46.994699   43208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 20:58:47.000385   43208 command_runner.go:130] > 3ec20f2e
	I0311 20:58:47.000606   43208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 20:58:47.010285   43208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 20:58:47.024106   43208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:58:47.028811   43208 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:58:47.028832   43208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:58:47.028865   43208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:58:47.034877   43208 command_runner.go:130] > b5213941
	I0311 20:58:47.034942   43208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 20:58:47.045164   43208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 20:58:47.056701   43208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 20:58:47.061461   43208 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 20:58:47.061483   43208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 20:58:47.061508   43208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 20:58:47.067421   43208 command_runner.go:130] > 51391683
	I0311 20:58:47.067468   43208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 20:58:47.077210   43208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 20:58:47.082673   43208 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 20:58:47.082692   43208 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0311 20:58:47.082701   43208 command_runner.go:130] > Device: 253,1	Inode: 3150397     Links: 1
	I0311 20:58:47.082712   43208 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0311 20:58:47.082726   43208 command_runner.go:130] > Access: 2024-03-11 20:52:36.245703572 +0000
	I0311 20:58:47.082737   43208 command_runner.go:130] > Modify: 2024-03-11 20:52:36.245703572 +0000
	I0311 20:58:47.082749   43208 command_runner.go:130] > Change: 2024-03-11 20:52:36.245703572 +0000
	I0311 20:58:47.082758   43208 command_runner.go:130] >  Birth: 2024-03-11 20:52:36.245703572 +0000
	I0311 20:58:47.082816   43208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 20:58:47.088799   43208 command_runner.go:130] > Certificate will not expire
	I0311 20:58:47.088853   43208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 20:58:47.097086   43208 command_runner.go:130] > Certificate will not expire
	I0311 20:58:47.097148   43208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 20:58:47.102916   43208 command_runner.go:130] > Certificate will not expire
	I0311 20:58:47.102968   43208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 20:58:47.108663   43208 command_runner.go:130] > Certificate will not expire
	I0311 20:58:47.108726   43208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 20:58:47.114443   43208 command_runner.go:130] > Certificate will not expire
	I0311 20:58:47.114500   43208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 20:58:47.120349   43208 command_runner.go:130] > Certificate will not expire
	I0311 20:58:47.120491   43208 kubeadm.go:391] StartCluster: {Name:multinode-232100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-232100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:58:47.120593   43208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 20:58:47.120623   43208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 20:58:47.162222   43208 command_runner.go:130] > 60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4
	I0311 20:58:47.162280   43208 command_runner.go:130] > 93cc20c6fde7bedf1be2d404cdc194289ef689a7f4ac4739cf69d19a53dc3eb4
	I0311 20:58:47.162501   43208 command_runner.go:130] > f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67
	I0311 20:58:47.162579   43208 command_runner.go:130] > 54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce
	I0311 20:58:47.162715   43208 command_runner.go:130] > d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae
	I0311 20:58:47.162758   43208 command_runner.go:130] > 1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9
	I0311 20:58:47.162883   43208 command_runner.go:130] > d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73
	I0311 20:58:47.163094   43208 command_runner.go:130] > bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e
	I0311 20:58:47.164514   43208 cri.go:89] found id: "60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4"
	I0311 20:58:47.164532   43208 cri.go:89] found id: "93cc20c6fde7bedf1be2d404cdc194289ef689a7f4ac4739cf69d19a53dc3eb4"
	I0311 20:58:47.164537   43208 cri.go:89] found id: "f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67"
	I0311 20:58:47.164541   43208 cri.go:89] found id: "54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce"
	I0311 20:58:47.164545   43208 cri.go:89] found id: "d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae"
	I0311 20:58:47.164549   43208 cri.go:89] found id: "1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9"
	I0311 20:58:47.164553   43208 cri.go:89] found id: "d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73"
	I0311 20:58:47.164557   43208 cri.go:89] found id: "bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e"
	I0311 20:58:47.164561   43208 cri.go:89] found id: ""
	I0311 20:58:47.164605   43208 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 11 21:00:17 multinode-232100 crio[2889]: time="2024-03-11 21:00:17.901735817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710190817901713463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=986c9f71-bed4-4de8-b10a-b5875f3ae722 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:00:17 multinode-232100 crio[2889]: time="2024-03-11 21:00:17.902811968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33689b6a-f5c7-4ce9-81ca-4682036a3261 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:00:17 multinode-232100 crio[2889]: time="2024-03-11 21:00:17.902896449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33689b6a-f5c7-4ce9-81ca-4682036a3261 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:00:17 multinode-232100 crio[2889]: time="2024-03-11 21:00:17.903737862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c36d4a28cda0a23dce7dbdcbf0163612922d562817099d9f33ba4b885c952e2,PodSandboxId:d893dd416e46f624b92d9c86301cf2889aeeb8671c5b54362cad8b45aa03a3ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710190767903938656,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e799f82d3b4a2fd977229b1f254d0562771524af131ef247cb56cc2835380,PodSandboxId:24b9cb8b3e7691ba85d86bf40ceb239d72cd8c4cfd499997d5d279c6c752c475,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710190734396907526,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5635f2ddd04d78a6a5f5071d5db68a4c834509272fe0ccd30841272f215982dd,PodSandboxId:7f4eb2a24247567189b58dd33d0e131343e06bbe6c1f4cc60da6afa79cc3b962,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710190734377851724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4e24a44a9d45bd39961e44a7307731ca971e7fdca4afd3c61cd8345f63be0b,PodSandboxId:035f90aaa5af743f4a5b7d86b49afd753bb5bbcb04948ae5c29fd4560cc5f4cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710190734268093040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},A
nnotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9ab4b51ae261322c62338c6b69c1425d5c5e5616be3454f9a8389b28e80f01,PodSandboxId:9df6e16d3753b9c8d229af005789555fd9232a7124a8ec7fb8bf0dbdb4846704,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710190734233725718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da33624f7e932928d864da657e73ab7a1c23148c2b6f4efa9af40a45842f644f,PodSandboxId:3e2f4ddb961f49ff4b984f5a2d9d6d408448b1612be896ba3a8e19ef3d2aa779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710190729352546592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f1035ad4acda3cd4b709aaf0e0672c8f9cffb9b722dc8b3a7695164245dc61,PodSandboxId:96bc33e30251ec5824611091301165a9dfada84b03a0faa3ff58bd9b546a6331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710190729315390243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93897777952ec8ae9811c2a98cb03afd1a676c3227f8089f4ac3077bf0d19f62,PodSandboxId:99ea1b5a303ae5f127d4c80ca9967c4b3b09a8def10a15d805a82bb49faf1bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710190729257981147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a946faba1cc5368b7c09a7140ae7389a7382b0775ac4652445421a7b855a504,PodSandboxId:47925952096dea5fbc001d3041625e0aa99ae060a78ee8dfa8edd6c9dc95737c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710190729214367570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29dc13af7eab8845490b1e01d86973909a1244b41f9360951f5eea7f2bfa7ab,PodSandboxId:7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710190425647848479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cc20c6fde7bedf1be2d404cdc194289ef689a7f4ac4739cf69d19a53dc3eb4,PodSandboxId:e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710190383291987417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},Annotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4,PodSandboxId:62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710190383295096641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67,PodSandboxId:71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710190381547505184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce,PodSandboxId:f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710190378853642969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.kubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae,PodSandboxId:3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710190360438158106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9,PodSandboxId:e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710190360380490505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e,PodSandboxId:1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710190360326914328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661
d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73,PodSandboxId:7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710190360330675507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations
:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33689b6a-f5c7-4ce9-81ca-4682036a3261 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:00:17 multinode-232100 crio[2889]: time="2024-03-11 21:00:17.960446383Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9f71a6a-9098-4cde-9240-af0bb722ea60 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:00:17 multinode-232100 crio[2889]: time="2024-03-11 21:00:17.960553061Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9f71a6a-9098-4cde-9240-af0bb722ea60 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:00:17 multinode-232100 crio[2889]: time="2024-03-11 21:00:17.962976179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a058750-556b-4208-acb1-591a56356994 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:00:17 multinode-232100 crio[2889]: time="2024-03-11 21:00:17.963585516Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710190817963558879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a058750-556b-4208-acb1-591a56356994 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:00:17 multinode-232100 crio[2889]: time="2024-03-11 21:00:17.964475056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ec5bc4f-974b-4316-83f1-765e900b52ea name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:00:17 multinode-232100 crio[2889]: time="2024-03-11 21:00:17.964536666Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ec5bc4f-974b-4316-83f1-765e900b52ea name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:00:17 multinode-232100 crio[2889]: time="2024-03-11 21:00:17.965142106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c36d4a28cda0a23dce7dbdcbf0163612922d562817099d9f33ba4b885c952e2,PodSandboxId:d893dd416e46f624b92d9c86301cf2889aeeb8671c5b54362cad8b45aa03a3ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710190767903938656,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e799f82d3b4a2fd977229b1f254d0562771524af131ef247cb56cc2835380,PodSandboxId:24b9cb8b3e7691ba85d86bf40ceb239d72cd8c4cfd499997d5d279c6c752c475,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710190734396907526,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5635f2ddd04d78a6a5f5071d5db68a4c834509272fe0ccd30841272f215982dd,PodSandboxId:7f4eb2a24247567189b58dd33d0e131343e06bbe6c1f4cc60da6afa79cc3b962,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710190734377851724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4e24a44a9d45bd39961e44a7307731ca971e7fdca4afd3c61cd8345f63be0b,PodSandboxId:035f90aaa5af743f4a5b7d86b49afd753bb5bbcb04948ae5c29fd4560cc5f4cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710190734268093040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},A
nnotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9ab4b51ae261322c62338c6b69c1425d5c5e5616be3454f9a8389b28e80f01,PodSandboxId:9df6e16d3753b9c8d229af005789555fd9232a7124a8ec7fb8bf0dbdb4846704,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710190734233725718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da33624f7e932928d864da657e73ab7a1c23148c2b6f4efa9af40a45842f644f,PodSandboxId:3e2f4ddb961f49ff4b984f5a2d9d6d408448b1612be896ba3a8e19ef3d2aa779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710190729352546592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f1035ad4acda3cd4b709aaf0e0672c8f9cffb9b722dc8b3a7695164245dc61,PodSandboxId:96bc33e30251ec5824611091301165a9dfada84b03a0faa3ff58bd9b546a6331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710190729315390243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93897777952ec8ae9811c2a98cb03afd1a676c3227f8089f4ac3077bf0d19f62,PodSandboxId:99ea1b5a303ae5f127d4c80ca9967c4b3b09a8def10a15d805a82bb49faf1bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710190729257981147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a946faba1cc5368b7c09a7140ae7389a7382b0775ac4652445421a7b855a504,PodSandboxId:47925952096dea5fbc001d3041625e0aa99ae060a78ee8dfa8edd6c9dc95737c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710190729214367570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29dc13af7eab8845490b1e01d86973909a1244b41f9360951f5eea7f2bfa7ab,PodSandboxId:7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710190425647848479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cc20c6fde7bedf1be2d404cdc194289ef689a7f4ac4739cf69d19a53dc3eb4,PodSandboxId:e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710190383291987417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},Annotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4,PodSandboxId:62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710190383295096641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67,PodSandboxId:71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710190381547505184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce,PodSandboxId:f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710190378853642969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.kubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae,PodSandboxId:3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710190360438158106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9,PodSandboxId:e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710190360380490505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e,PodSandboxId:1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710190360326914328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661
d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73,PodSandboxId:7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710190360330675507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations
:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ec5bc4f-974b-4316-83f1-765e900b52ea name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.014678885Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4624a90-708e-459e-aad6-6c6f8354dbaa name=/runtime.v1.RuntimeService/Version
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.014780554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4624a90-708e-459e-aad6-6c6f8354dbaa name=/runtime.v1.RuntimeService/Version
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.016858372Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=766a90fd-149f-4f73-9508-2e2c1309c793 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.017630361Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710190818017602706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=766a90fd-149f-4f73-9508-2e2c1309c793 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.018419145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd041077-3a65-4e8b-9bde-b4ea231749c3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.018506423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd041077-3a65-4e8b-9bde-b4ea231749c3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.018983044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c36d4a28cda0a23dce7dbdcbf0163612922d562817099d9f33ba4b885c952e2,PodSandboxId:d893dd416e46f624b92d9c86301cf2889aeeb8671c5b54362cad8b45aa03a3ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710190767903938656,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e799f82d3b4a2fd977229b1f254d0562771524af131ef247cb56cc2835380,PodSandboxId:24b9cb8b3e7691ba85d86bf40ceb239d72cd8c4cfd499997d5d279c6c752c475,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710190734396907526,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5635f2ddd04d78a6a5f5071d5db68a4c834509272fe0ccd30841272f215982dd,PodSandboxId:7f4eb2a24247567189b58dd33d0e131343e06bbe6c1f4cc60da6afa79cc3b962,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710190734377851724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4e24a44a9d45bd39961e44a7307731ca971e7fdca4afd3c61cd8345f63be0b,PodSandboxId:035f90aaa5af743f4a5b7d86b49afd753bb5bbcb04948ae5c29fd4560cc5f4cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710190734268093040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},A
nnotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9ab4b51ae261322c62338c6b69c1425d5c5e5616be3454f9a8389b28e80f01,PodSandboxId:9df6e16d3753b9c8d229af005789555fd9232a7124a8ec7fb8bf0dbdb4846704,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710190734233725718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da33624f7e932928d864da657e73ab7a1c23148c2b6f4efa9af40a45842f644f,PodSandboxId:3e2f4ddb961f49ff4b984f5a2d9d6d408448b1612be896ba3a8e19ef3d2aa779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710190729352546592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f1035ad4acda3cd4b709aaf0e0672c8f9cffb9b722dc8b3a7695164245dc61,PodSandboxId:96bc33e30251ec5824611091301165a9dfada84b03a0faa3ff58bd9b546a6331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710190729315390243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93897777952ec8ae9811c2a98cb03afd1a676c3227f8089f4ac3077bf0d19f62,PodSandboxId:99ea1b5a303ae5f127d4c80ca9967c4b3b09a8def10a15d805a82bb49faf1bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710190729257981147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a946faba1cc5368b7c09a7140ae7389a7382b0775ac4652445421a7b855a504,PodSandboxId:47925952096dea5fbc001d3041625e0aa99ae060a78ee8dfa8edd6c9dc95737c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710190729214367570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29dc13af7eab8845490b1e01d86973909a1244b41f9360951f5eea7f2bfa7ab,PodSandboxId:7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710190425647848479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cc20c6fde7bedf1be2d404cdc194289ef689a7f4ac4739cf69d19a53dc3eb4,PodSandboxId:e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710190383291987417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},Annotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4,PodSandboxId:62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710190383295096641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67,PodSandboxId:71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710190381547505184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce,PodSandboxId:f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710190378853642969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.kubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae,PodSandboxId:3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710190360438158106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9,PodSandboxId:e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710190360380490505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e,PodSandboxId:1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710190360326914328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661
d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73,PodSandboxId:7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710190360330675507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations
:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd041077-3a65-4e8b-9bde-b4ea231749c3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.065557946Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03be1172-b8f9-4e84-bf35-5b3352aefb5f name=/runtime.v1.RuntimeService/Version
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.065626133Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03be1172-b8f9-4e84-bf35-5b3352aefb5f name=/runtime.v1.RuntimeService/Version
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.067174997Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22a166e3-1c41-4da9-bebc-c166e703b60d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.067606259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710190818067585600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22a166e3-1c41-4da9-bebc-c166e703b60d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.068138837Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ddd3b6b-daa6-43b1-97d9-4dc220f41e26 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.068222448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ddd3b6b-daa6-43b1-97d9-4dc220f41e26 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:00:18 multinode-232100 crio[2889]: time="2024-03-11 21:00:18.068577295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c36d4a28cda0a23dce7dbdcbf0163612922d562817099d9f33ba4b885c952e2,PodSandboxId:d893dd416e46f624b92d9c86301cf2889aeeb8671c5b54362cad8b45aa03a3ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710190767903938656,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e799f82d3b4a2fd977229b1f254d0562771524af131ef247cb56cc2835380,PodSandboxId:24b9cb8b3e7691ba85d86bf40ceb239d72cd8c4cfd499997d5d279c6c752c475,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710190734396907526,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5635f2ddd04d78a6a5f5071d5db68a4c834509272fe0ccd30841272f215982dd,PodSandboxId:7f4eb2a24247567189b58dd33d0e131343e06bbe6c1f4cc60da6afa79cc3b962,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710190734377851724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4e24a44a9d45bd39961e44a7307731ca971e7fdca4afd3c61cd8345f63be0b,PodSandboxId:035f90aaa5af743f4a5b7d86b49afd753bb5bbcb04948ae5c29fd4560cc5f4cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710190734268093040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},A
nnotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9ab4b51ae261322c62338c6b69c1425d5c5e5616be3454f9a8389b28e80f01,PodSandboxId:9df6e16d3753b9c8d229af005789555fd9232a7124a8ec7fb8bf0dbdb4846704,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710190734233725718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da33624f7e932928d864da657e73ab7a1c23148c2b6f4efa9af40a45842f644f,PodSandboxId:3e2f4ddb961f49ff4b984f5a2d9d6d408448b1612be896ba3a8e19ef3d2aa779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710190729352546592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f1035ad4acda3cd4b709aaf0e0672c8f9cffb9b722dc8b3a7695164245dc61,PodSandboxId:96bc33e30251ec5824611091301165a9dfada84b03a0faa3ff58bd9b546a6331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710190729315390243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93897777952ec8ae9811c2a98cb03afd1a676c3227f8089f4ac3077bf0d19f62,PodSandboxId:99ea1b5a303ae5f127d4c80ca9967c4b3b09a8def10a15d805a82bb49faf1bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710190729257981147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a946faba1cc5368b7c09a7140ae7389a7382b0775ac4652445421a7b855a504,PodSandboxId:47925952096dea5fbc001d3041625e0aa99ae060a78ee8dfa8edd6c9dc95737c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710190729214367570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29dc13af7eab8845490b1e01d86973909a1244b41f9360951f5eea7f2bfa7ab,PodSandboxId:7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710190425647848479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cc20c6fde7bedf1be2d404cdc194289ef689a7f4ac4739cf69d19a53dc3eb4,PodSandboxId:e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710190383291987417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},Annotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4,PodSandboxId:62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710190383295096641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67,PodSandboxId:71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710190381547505184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce,PodSandboxId:f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710190378853642969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.kubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae,PodSandboxId:3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710190360438158106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9,PodSandboxId:e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710190360380490505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e,PodSandboxId:1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710190360326914328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661
d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73,PodSandboxId:7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710190360330675507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations
:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ddd3b6b-daa6-43b1-97d9-4dc220f41e26 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6c36d4a28cda0       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      50 seconds ago       Running             busybox                   1                   d893dd416e46f       busybox-5b5d89c9d6-4hsnz
	397e799f82d3b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   24b9cb8b3e769       kindnet-glj55
	5635f2ddd04d7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   1                   7f4eb2a242475       coredns-5dd5756b68-5mg4g
	5e4e24a44a9d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   035f90aaa5af7       storage-provisioner
	2a9ab4b51ae26       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                1                   9df6e16d3753b       kube-proxy-zdkdk
	da33624f7e932       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            1                   3e2f4ddb961f4       kube-scheduler-multinode-232100
	a2f1035ad4acd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      1                   96bc33e30251e       etcd-multinode-232100
	93897777952ec       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            1                   99ea1b5a303ae       kube-apiserver-multinode-232100
	9a946faba1cc5       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   1                   47925952096de       kube-controller-manager-multinode-232100
	a29dc13af7eab       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   7983479821d10       busybox-5b5d89c9d6-4hsnz
	60c4a4e869509       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago        Exited              coredns                   0                   62bf0ad89abce       coredns-5dd5756b68-5mg4g
	93cc20c6fde7b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   e7fd5611a7509       storage-provisioner
	f48ce4493a06c       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago        Exited              kindnet-cni               0                   71e18232ae358       kindnet-glj55
	54c8e9ef07bcb       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago        Exited              kube-proxy                0                   f3be5dce7a231       kube-proxy-zdkdk
	d9bb108f87baf       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago        Exited              kube-scheduler            0                   3e7917fa7ecc6       kube-scheduler-multinode-232100
	1ad2090b379ff       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago        Exited              kube-controller-manager   0                   e7db90ecbf027       kube-controller-manager-multinode-232100
	d399b5316450e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago        Exited              kube-apiserver            0                   7e41c8b42456d       kube-apiserver-multinode-232100
	bc8d4f35d2f61       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago        Exited              etcd                      0                   1ca9304474644       etcd-multinode-232100
	
	
	==> coredns [5635f2ddd04d78a6a5f5071d5db68a4c834509272fe0ccd30841272f215982dd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51308 - 64762 "HINFO IN 2907767183170153192.861951351699720548. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011178125s
	
	
	==> coredns [60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4] <==
	[INFO] 10.244.1.2:57034 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002033813s
	[INFO] 10.244.1.2:50664 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000197006s
	[INFO] 10.244.1.2:34648 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085765s
	[INFO] 10.244.1.2:46501 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001280525s
	[INFO] 10.244.1.2:51451 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159813s
	[INFO] 10.244.1.2:35952 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068758s
	[INFO] 10.244.1.2:51667 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126112s
	[INFO] 10.244.0.3:42139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112438s
	[INFO] 10.244.0.3:49729 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078946s
	[INFO] 10.244.0.3:50607 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075093s
	[INFO] 10.244.0.3:33038 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059279s
	[INFO] 10.244.1.2:33132 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152907s
	[INFO] 10.244.1.2:59285 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000322266s
	[INFO] 10.244.1.2:49834 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110922s
	[INFO] 10.244.1.2:45776 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086518s
	[INFO] 10.244.0.3:47399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102894s
	[INFO] 10.244.0.3:41422 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018856s
	[INFO] 10.244.0.3:40403 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118524s
	[INFO] 10.244.0.3:52549 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102736s
	[INFO] 10.244.1.2:39878 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188692s
	[INFO] 10.244.1.2:55958 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140596s
	[INFO] 10.244.1.2:39867 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098712s
	[INFO] 10.244.1.2:54626 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010438s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-232100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-232100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=multinode-232100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T20_52_47_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:52:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-232100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:00:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:58:52 +0000   Mon, 11 Mar 2024 20:52:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:58:52 +0000   Mon, 11 Mar 2024 20:52:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:58:52 +0000   Mon, 11 Mar 2024 20:52:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:58:52 +0000   Mon, 11 Mar 2024 20:53:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    multinode-232100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 02bc41b9a5d647028d026e2dfd08c841
	  System UUID:                02bc41b9-a5d6-4702-8d02-6e2dfd08c841
	  Boot ID:                    b4da8b15-bbef-4963-982e-9fb47ed83221
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-4hsnz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 coredns-5dd5756b68-5mg4g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m19s
	  kube-system                 etcd-multinode-232100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m32s
	  kube-system                 kindnet-glj55                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m20s
	  kube-system                 kube-apiserver-multinode-232100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-controller-manager-multinode-232100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-proxy-zdkdk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-scheduler-multinode-232100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m19s                  kube-proxy       
	  Normal  Starting                 83s                    kube-proxy       
	  Normal  Starting                 7m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m39s (x8 over 7m39s)  kubelet          Node multinode-232100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m39s (x8 over 7m39s)  kubelet          Node multinode-232100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m39s (x7 over 7m39s)  kubelet          Node multinode-232100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m32s                  kubelet          Node multinode-232100 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m32s                  kubelet          Node multinode-232100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m32s                  kubelet          Node multinode-232100 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m32s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m20s                  node-controller  Node multinode-232100 event: Registered Node multinode-232100 in Controller
	  Normal  NodeReady                7m16s                  kubelet          Node multinode-232100 status is now: NodeReady
	  Normal  Starting                 90s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  90s (x8 over 90s)      kubelet          Node multinode-232100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s (x8 over 90s)      kubelet          Node multinode-232100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s (x7 over 90s)      kubelet          Node multinode-232100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  90s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           73s                    node-controller  Node multinode-232100 event: Registered Node multinode-232100 in Controller
	
	
	Name:               multinode-232100-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-232100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=multinode-232100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T20_59_34_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:59:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-232100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:00:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 21:00:04 +0000   Mon, 11 Mar 2024 20:59:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 21:00:04 +0000   Mon, 11 Mar 2024 20:59:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 21:00:04 +0000   Mon, 11 Mar 2024 20:59:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 21:00:04 +0000   Mon, 11 Mar 2024 20:59:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    multinode-232100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 a13ce8e1068746bbbb0a72e87a2164be
	  System UUID:                a13ce8e1-0687-46bb-bb0a-72e87a2164be
	  Boot ID:                    22efa84c-7c90-4e9e-a8f9-b47ed9c33339
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-99hff    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kindnet-bgbtm               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m44s
	  kube-system                 kube-proxy-lmrv2            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m40s                  kube-proxy  
	  Normal  Starting                 40s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m44s (x5 over 6m45s)  kubelet     Node multinode-232100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m44s (x5 over 6m45s)  kubelet     Node multinode-232100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m44s (x5 over 6m45s)  kubelet     Node multinode-232100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m37s                  kubelet     Node multinode-232100-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  44s (x5 over 46s)      kubelet     Node multinode-232100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x5 over 46s)      kubelet     Node multinode-232100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x5 over 46s)      kubelet     Node multinode-232100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                38s                    kubelet     Node multinode-232100-m02 status is now: NodeReady
	
	
	Name:               multinode-232100-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-232100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=multinode-232100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T21_00_03_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 21:00:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-232100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:00:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 21:00:14 +0000   Mon, 11 Mar 2024 21:00:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 21:00:14 +0000   Mon, 11 Mar 2024 21:00:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 21:00:14 +0000   Mon, 11 Mar 2024 21:00:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 21:00:14 +0000   Mon, 11 Mar 2024 21:00:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    multinode-232100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f168ee14c864c1d9b961a7bac7e4eca
	  System UUID:                5f168ee1-4c86-4c1d-9b96-1a7bac7e4eca
	  Boot ID:                    67d96919-e559-4450-b9d6-0394c42ad4e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8xzct       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-proxy-vctfc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  Starting                 5m53s                  kube-proxy       
	  Normal  Starting                 12s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  5m58s (x5 over 5m59s)  kubelet          Node multinode-232100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x5 over 5m59s)  kubelet          Node multinode-232100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x5 over 5m59s)  kubelet          Node multinode-232100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m51s                  kubelet          Node multinode-232100-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m18s (x5 over 5m19s)  kubelet          Node multinode-232100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m18s (x5 over 5m19s)  kubelet          Node multinode-232100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m18s (x5 over 5m19s)  kubelet          Node multinode-232100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m12s                  kubelet          Node multinode-232100-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  16s (x5 over 17s)      kubelet          Node multinode-232100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16s (x5 over 17s)      kubelet          Node multinode-232100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16s (x5 over 17s)      kubelet          Node multinode-232100-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s                    node-controller  Node multinode-232100-m03 event: Registered Node multinode-232100-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-232100-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.058602] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064037] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.189005] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.122829] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.261309] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +5.076626] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.065128] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.502659] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.583203] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.716230] systemd-fstab-generator[1276]: Ignoring "noauto" option for root device
	[  +0.076090] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.327725] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.814428] systemd-fstab-generator[1656]: Ignoring "noauto" option for root device
	[Mar11 20:53] kauditd_printk_skb: 80 callbacks suppressed
	[Mar11 20:58] systemd-fstab-generator[2814]: Ignoring "noauto" option for root device
	[  +0.151843] systemd-fstab-generator[2826]: Ignoring "noauto" option for root device
	[  +0.171943] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.143458] systemd-fstab-generator[2852]: Ignoring "noauto" option for root device
	[  +0.270687] systemd-fstab-generator[2876]: Ignoring "noauto" option for root device
	[  +0.729723] systemd-fstab-generator[2974]: Ignoring "noauto" option for root device
	[  +1.798092] systemd-fstab-generator[3104]: Ignoring "noauto" option for root device
	[  +5.834757] kauditd_printk_skb: 184 callbacks suppressed
	[Mar11 20:59] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.401521] systemd-fstab-generator[3924]: Ignoring "noauto" option for root device
	[ +21.035573] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [a2f1035ad4acda3cd4b709aaf0e0672c8f9cffb9b722dc8b3a7695164245dc61] <==
	{"level":"info","ts":"2024-03-11T20:58:49.872112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c switched to configuration voters=(5947142644092330044)"}
	{"level":"info","ts":"2024-03-11T20:58:49.872697Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d3dad3a9a0ef02b3","local-member-id":"52887eb9b9b3603c","added-peer-id":"52887eb9b9b3603c","added-peer-peer-urls":["https://192.168.39.134:2380"]}
	{"level":"info","ts":"2024-03-11T20:58:49.876488Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d3dad3a9a0ef02b3","local-member-id":"52887eb9b9b3603c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T20:58:49.876712Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T20:58:49.90162Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-11T20:58:49.90189Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"52887eb9b9b3603c","initial-advertise-peer-urls":["https://192.168.39.134:2380"],"listen-peer-urls":["https://192.168.39.134:2380"],"advertise-client-urls":["https://192.168.39.134:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.134:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T20:58:49.901947Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T20:58:49.906082Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.134:2380"}
	{"level":"info","ts":"2024-03-11T20:58:49.906168Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.134:2380"}
	{"level":"info","ts":"2024-03-11T20:58:51.323548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-11T20:58:51.32362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-11T20:58:51.323636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c received MsgPreVoteResp from 52887eb9b9b3603c at term 2"}
	{"level":"info","ts":"2024-03-11T20:58:51.323661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c became candidate at term 3"}
	{"level":"info","ts":"2024-03-11T20:58:51.323671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c received MsgVoteResp from 52887eb9b9b3603c at term 3"}
	{"level":"info","ts":"2024-03-11T20:58:51.323679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c became leader at term 3"}
	{"level":"info","ts":"2024-03-11T20:58:51.323692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 52887eb9b9b3603c elected leader 52887eb9b9b3603c at term 3"}
	{"level":"info","ts":"2024-03-11T20:58:51.329709Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"52887eb9b9b3603c","local-member-attributes":"{Name:multinode-232100 ClientURLs:[https://192.168.39.134:2379]}","request-path":"/0/members/52887eb9b9b3603c/attributes","cluster-id":"d3dad3a9a0ef02b3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T20:58:51.329707Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T20:58:51.329742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T20:58:51.331389Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.134:2379"}
	{"level":"info","ts":"2024-03-11T20:58:51.331637Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T20:58:51.331926Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T20:58:51.331968Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T20:59:39.026632Z","caller":"traceutil/trace.go:171","msg":"trace[587982942] transaction","detail":"{read_only:false; response_revision:1018; number_of_response:1; }","duration":"210.342263ms","start":"2024-03-11T20:59:38.816261Z","end":"2024-03-11T20:59:39.026603Z","steps":["trace[587982942] 'process raft request'  (duration: 209.941596ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T21:00:13.09935Z","caller":"traceutil/trace.go:171","msg":"trace[905866632] transaction","detail":"{read_only:false; response_revision:1102; number_of_response:1; }","duration":"162.186625ms","start":"2024-03-11T21:00:12.937125Z","end":"2024-03-11T21:00:13.099311Z","steps":["trace[905866632] 'process raft request'  (duration: 161.341618ms)"],"step_count":1}
	
	
	==> etcd [bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e] <==
	{"level":"info","ts":"2024-03-11T20:54:19.53218Z","caller":"traceutil/trace.go:171","msg":"trace[627626441] linearizableReadLoop","detail":"{readStateIndex:620; appliedIndex:619; }","duration":"197.165558ms","start":"2024-03-11T20:54:19.334981Z","end":"2024-03-11T20:54:19.532146Z","steps":["trace[627626441] 'read index received'  (duration: 81.605505ms)","trace[627626441] 'applied index is now lower than readState.Index'  (duration: 115.55887ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:54:19.53238Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.413663ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-11T20:54:19.532529Z","caller":"traceutil/trace.go:171","msg":"trace[507023335] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:591; }","duration":"197.489053ms","start":"2024-03-11T20:54:19.334951Z","end":"2024-03-11T20:54:19.53244Z","steps":["trace[507023335] 'agreement among raft nodes before linearized reading'  (duration: 197.266808ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:54:19.532742Z","caller":"traceutil/trace.go:171","msg":"trace[751447747] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"219.32659ms","start":"2024-03-11T20:54:19.313392Z","end":"2024-03-11T20:54:19.532719Z","steps":["trace[751447747] 'process raft request'  (duration: 103.252923ms)","trace[751447747] 'compare'  (duration: 113.509559ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:54:21.126637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.338815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-232100-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-11T20:54:21.126718Z","caller":"traceutil/trace.go:171","msg":"trace[605120623] range","detail":"{range_begin:/registry/csinodes/multinode-232100-m03; range_end:; response_count:0; response_revision:608; }","duration":"136.429877ms","start":"2024-03-11T20:54:20.990269Z","end":"2024-03-11T20:54:21.126699Z","steps":["trace[605120623] 'range keys from in-memory index tree'  (duration: 136.228144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:54:21.483599Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.020513ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6934573859999376945 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-232100-m03\" mod_revision:596 > success:<request_put:<key:\"/registry/minions/multinode-232100-m03\" value_size:2405 >> failure:<request_range:<key:\"/registry/minions/multinode-232100-m03\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-11T20:54:21.483791Z","caller":"traceutil/trace.go:171","msg":"trace[1161434873] linearizableReadLoop","detail":"{readStateIndex:641; appliedIndex:640; }","duration":"217.016182ms","start":"2024-03-11T20:54:21.266757Z","end":"2024-03-11T20:54:21.483773Z","steps":["trace[1161434873] 'read index received'  (duration: 60.55493ms)","trace[1161434873] 'applied index is now lower than readState.Index'  (duration: 156.459824ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-11T20:54:21.483893Z","caller":"traceutil/trace.go:171","msg":"trace[1318036055] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"283.696441ms","start":"2024-03-11T20:54:21.200186Z","end":"2024-03-11T20:54:21.483883Z","steps":["trace[1318036055] 'process raft request'  (duration: 127.313917ms)","trace[1318036055] 'compare'  (duration: 155.847468ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:54:21.484136Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.40439ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-vctfc\" ","response":"range_response_count:1 size:3440"}
	{"level":"info","ts":"2024-03-11T20:54:21.484189Z","caller":"traceutil/trace.go:171","msg":"trace[1090795334] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-vctfc; range_end:; response_count:1; response_revision:610; }","duration":"217.453287ms","start":"2024-03-11T20:54:21.266726Z","end":"2024-03-11T20:54:21.48418Z","steps":["trace[1090795334] 'agreement among raft nodes before linearized reading'  (duration: 217.377031ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:54:21.484078Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.347659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-11T20:54:21.484366Z","caller":"traceutil/trace.go:171","msg":"trace[1784533007] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:610; }","duration":"146.706763ms","start":"2024-03-11T20:54:21.337649Z","end":"2024-03-11T20:54:21.484356Z","steps":["trace[1784533007] 'agreement among raft nodes before linearized reading'  (duration: 146.325041ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:54:21.643628Z","caller":"traceutil/trace.go:171","msg":"trace[1790173369] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"148.622767ms","start":"2024-03-11T20:54:21.494991Z","end":"2024-03-11T20:54:21.643613Z","steps":["trace[1790173369] 'process raft request'  (duration: 146.376767ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:57:13.818371Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-11T20:57:13.818512Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-232100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.134:2380"],"advertise-client-urls":["https://192.168.39.134:2379"]}
	{"level":"warn","ts":"2024-03-11T20:57:13.822105Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-11T20:57:13.822207Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	WARNING: 2024/03/11 20:57:13 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-11T20:57:13.882766Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.134:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-11T20:57:13.882828Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.134:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-11T20:57:13.884406Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"52887eb9b9b3603c","current-leader-member-id":"52887eb9b9b3603c"}
	{"level":"info","ts":"2024-03-11T20:57:13.887359Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.134:2380"}
	{"level":"info","ts":"2024-03-11T20:57:13.887487Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.134:2380"}
	{"level":"info","ts":"2024-03-11T20:57:13.887497Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-232100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.134:2380"],"advertise-client-urls":["https://192.168.39.134:2379"]}
	
	
	==> kernel <==
	 21:00:18 up 8 min,  0 users,  load average: 0.18, 0.25, 0.17
	Linux multinode-232100 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [397e799f82d3b4a2fd977229b1f254d0562771524af131ef247cb56cc2835380] <==
	I0311 20:59:35.375971       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.3.0/24] 
	I0311 20:59:45.385860       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 20:59:45.385922       1 main.go:227] handling current node
	I0311 20:59:45.385940       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 20:59:45.385950       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 20:59:45.386146       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0311 20:59:45.386180       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.3.0/24] 
	I0311 20:59:55.426511       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 20:59:55.426664       1 main.go:227] handling current node
	I0311 20:59:55.426712       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 20:59:55.426742       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 20:59:55.426927       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0311 20:59:55.426975       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.3.0/24] 
	I0311 21:00:05.441826       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 21:00:05.441884       1 main.go:227] handling current node
	I0311 21:00:05.441903       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 21:00:05.441913       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 21:00:05.442114       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0311 21:00:05.442148       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.2.0/24] 
	I0311 21:00:15.520598       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 21:00:15.520694       1 main.go:227] handling current node
	I0311 21:00:15.520724       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 21:00:15.520734       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 21:00:15.521208       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0311 21:00:15.521264       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67] <==
	I0311 20:56:32.532803       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.3.0/24] 
	I0311 20:56:42.541078       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 20:56:42.541159       1 main.go:227] handling current node
	I0311 20:56:42.541170       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 20:56:42.541177       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 20:56:42.541268       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0311 20:56:42.541298       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.3.0/24] 
	I0311 20:56:52.546915       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 20:56:52.546968       1 main.go:227] handling current node
	I0311 20:56:52.546996       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 20:56:52.547059       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 20:56:52.547183       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0311 20:56:52.547216       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.3.0/24] 
	I0311 20:57:02.560654       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 20:57:02.560713       1 main.go:227] handling current node
	I0311 20:57:02.560723       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 20:57:02.560730       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 20:57:02.560902       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0311 20:57:02.560938       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.3.0/24] 
	I0311 20:57:12.568631       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 20:57:12.568693       1 main.go:227] handling current node
	I0311 20:57:12.568704       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 20:57:12.568710       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 20:57:12.568827       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0311 20:57:12.568864       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [93897777952ec8ae9811c2a98cb03afd1a676c3227f8089f4ac3077bf0d19f62] <==
	I0311 20:58:52.808258       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0311 20:58:52.808520       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0311 20:58:52.808525       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0311 20:58:52.893654       1 shared_informer.go:318] Caches are synced for configmaps
	I0311 20:58:52.894255       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0311 20:58:52.902715       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0311 20:58:52.902797       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0311 20:58:52.908543       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0311 20:58:52.909335       1 aggregator.go:166] initial CRD sync complete...
	I0311 20:58:52.910713       1 autoregister_controller.go:141] Starting autoregister controller
	I0311 20:58:52.910769       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0311 20:58:52.910794       1 cache.go:39] Caches are synced for autoregister controller
	I0311 20:58:52.918510       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0311 20:58:52.925876       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0311 20:58:52.925916       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0311 20:58:52.954627       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0311 20:58:52.960679       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0311 20:58:53.795397       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0311 20:58:55.727584       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0311 20:58:55.882474       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0311 20:58:55.898388       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0311 20:58:55.971358       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0311 20:58:55.980174       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0311 20:59:05.381982       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0311 20:59:05.418066       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73] <==
	I0311 20:57:13.831418       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0311 20:57:13.831617       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0311 20:57:13.831669       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0311 20:57:13.831712       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0311 20:57:13.831755       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0311 20:57:13.831773       1 establishing_controller.go:87] Shutting down EstablishingController
	I0311 20:57:13.831795       1 naming_controller.go:302] Shutting down NamingConditionController
	I0311 20:57:13.831817       1 controller.go:162] Shutting down OpenAPI controller
	I0311 20:57:13.832423       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I0311 20:57:13.832650       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0311 20:57:13.832669       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0311 20:57:13.832680       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0311 20:57:13.833106       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0311 20:57:13.838294       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0311 20:57:13.838640       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00c1be1c0)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0311 20:57:13.838725       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0311 20:57:13.838952       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0311 20:57:13.839989       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0311 20:57:13.840164       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0311 20:57:13.840303       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0311 20:57:13.840426       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0311 20:57:13.843407       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0311 20:57:13.843563       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0311 20:57:13.826469       1 controller.go:129] Ending legacy_token_tracking_controller
	I0311 20:57:13.843735       1 controller.go:130] Shutting down legacy_token_tracking_controller
	
	
	==> kube-controller-manager [1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9] <==
	I0311 20:54:20.801933       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 20:54:20.803480       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-232100-m03\" does not exist"
	I0311 20:54:20.818309       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-232100-m03" podCIDRs=["10.244.2.0/24"]
	I0311 20:54:20.838888       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vctfc"
	I0311 20:54:20.841298       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8xzct"
	I0311 20:54:23.177297       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-232100-m03"
	I0311 20:54:23.177544       1 event.go:307] "Event occurred" object="multinode-232100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-232100-m03 event: Registered Node multinode-232100-m03 in Controller"
	I0311 20:54:27.514663       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 20:54:58.197254       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 20:54:58.198114       1 event.go:307] "Event occurred" object="multinode-232100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-232100-m03 event: Removing Node multinode-232100-m03 from Controller"
	I0311 20:55:00.969712       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 20:55:00.970181       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-232100-m03\" does not exist"
	I0311 20:55:00.993319       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-232100-m03" podCIDRs=["10.244.3.0/24"]
	I0311 20:55:03.199186       1 event.go:307] "Event occurred" object="multinode-232100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-232100-m03 event: Registered Node multinode-232100-m03 in Controller"
	I0311 20:55:06.955075       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m03"
	I0311 20:55:48.233076       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m03"
	I0311 20:55:48.233165       1 event.go:307] "Event occurred" object="multinode-232100-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-232100-m02 status is now: NodeNotReady"
	I0311 20:55:48.241875       1 event.go:307] "Event occurred" object="multinode-232100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-232100-m03 status is now: NodeNotReady"
	I0311 20:55:48.255992       1 event.go:307] "Event occurred" object="kube-system/kindnet-bgbtm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 20:55:48.259722       1 event.go:307] "Event occurred" object="kube-system/kindnet-8xzct" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 20:55:48.277664       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-lmrv2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 20:55:48.277715       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vctfc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 20:55:48.293962       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8xhwm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 20:55:48.303942       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.455556ms"
	I0311 20:55:48.304314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.17µs"
	
	
	==> kube-controller-manager [9a946faba1cc5368b7c09a7140ae7389a7382b0775ac4652445421a7b855a504] <==
	I0311 20:59:28.205883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.237343ms"
	I0311 20:59:28.205993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="31.737µs"
	I0311 20:59:34.046213       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-232100-m02\" does not exist"
	I0311 20:59:34.048275       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8xhwm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-8xhwm"
	I0311 20:59:34.069991       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-232100-m02" podCIDRs=["10.244.1.0/24"]
	I0311 20:59:35.557284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="122.656µs"
	I0311 20:59:35.577399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="49.564µs"
	I0311 20:59:35.589640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66.937µs"
	I0311 20:59:35.595345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="87.065µs"
	I0311 20:59:35.601110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="161.875µs"
	I0311 20:59:35.601784       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8xhwm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-8xhwm"
	I0311 20:59:36.489550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.236µs"
	I0311 20:59:40.556761       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 20:59:40.578423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="53.207µs"
	I0311 20:59:40.595966       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.874µs"
	I0311 20:59:42.845000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="6.174641ms"
	I0311 20:59:42.845420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="53.386µs"
	I0311 20:59:45.482594       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-99hff" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-99hff"
	I0311 21:00:00.123590       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 21:00:00.485745       1 event.go:307] "Event occurred" object="multinode-232100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-232100-m03 event: Removing Node multinode-232100-m03 from Controller"
	I0311 21:00:02.809650       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-232100-m03\" does not exist"
	I0311 21:00:02.811152       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 21:00:02.833357       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-232100-m03" podCIDRs=["10.244.2.0/24"]
	I0311 21:00:05.486135       1 event.go:307] "Event occurred" object="multinode-232100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-232100-m03 event: Registered Node multinode-232100-m03 in Controller"
	I0311 21:00:14.907539       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	
	
	==> kube-proxy [2a9ab4b51ae261322c62338c6b69c1425d5c5e5616be3454f9a8389b28e80f01] <==
	I0311 20:58:54.558844       1 server_others.go:69] "Using iptables proxy"
	I0311 20:58:54.571078       1 node.go:141] Successfully retrieved node IP: 192.168.39.134
	I0311 20:58:54.631369       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 20:58:54.631427       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 20:58:54.636716       1 server_others.go:152] "Using iptables Proxier"
	I0311 20:58:54.636799       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 20:58:54.637141       1 server.go:846] "Version info" version="v1.28.4"
	I0311 20:58:54.637177       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:58:54.637961       1 config.go:188] "Starting service config controller"
	I0311 20:58:54.638127       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 20:58:54.638187       1 config.go:97] "Starting endpoint slice config controller"
	I0311 20:58:54.638192       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 20:58:54.638650       1 config.go:315] "Starting node config controller"
	I0311 20:58:54.638693       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 20:58:54.740157       1 shared_informer.go:318] Caches are synced for node config
	I0311 20:58:54.740185       1 shared_informer.go:318] Caches are synced for service config
	I0311 20:58:54.740212       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce] <==
	I0311 20:52:59.114435       1 server_others.go:69] "Using iptables proxy"
	I0311 20:52:59.130558       1 node.go:141] Successfully retrieved node IP: 192.168.39.134
	I0311 20:52:59.278550       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 20:52:59.278598       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 20:52:59.283407       1 server_others.go:152] "Using iptables Proxier"
	I0311 20:52:59.283468       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 20:52:59.283626       1 server.go:846] "Version info" version="v1.28.4"
	I0311 20:52:59.283666       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:52:59.284945       1 config.go:188] "Starting service config controller"
	I0311 20:52:59.285125       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 20:52:59.285223       1 config.go:97] "Starting endpoint slice config controller"
	I0311 20:52:59.285244       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 20:52:59.287395       1 config.go:315] "Starting node config controller"
	I0311 20:52:59.287434       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 20:52:59.386249       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 20:52:59.386285       1 shared_informer.go:318] Caches are synced for service config
	I0311 20:52:59.387606       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae] <==
	E0311 20:52:43.223905       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 20:52:43.227109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 20:52:43.227181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 20:52:44.065300       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 20:52:44.065407       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 20:52:44.103530       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 20:52:44.103650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 20:52:44.113213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 20:52:44.113232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 20:52:44.191517       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 20:52:44.191576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0311 20:52:44.249764       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 20:52:44.249818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0311 20:52:44.260214       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 20:52:44.260266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 20:52:44.330582       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0311 20:52:44.330631       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 20:52:44.428087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 20:52:44.428137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 20:52:44.437703       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 20:52:44.437751       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0311 20:52:46.111110       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 20:57:13.813864       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0311 20:57:13.816234       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0311 20:57:13.816487       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [da33624f7e932928d864da657e73ab7a1c23148c2b6f4efa9af40a45842f644f] <==
	I0311 20:58:50.648637       1 serving.go:348] Generated self-signed cert in-memory
	W0311 20:58:52.861471       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0311 20:58:52.862093       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 20:58:52.864095       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0311 20:58:52.864223       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0311 20:58:52.912364       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0311 20:58:52.912594       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:58:52.923870       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0311 20:58:52.924164       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0311 20:58:52.926350       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 20:58:52.924191       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0311 20:58:53.026855       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 20:58:53 multinode-232100 kubelet[3111]: I0311 20:58:53.560059    3111 topology_manager.go:215] "Topology Admit Handler" podUID="c2b9427c-06b4-4f56-bc4a-4adc16471a65" podNamespace="kube-system" podName="coredns-5dd5756b68-5mg4g"
	Mar 11 20:58:53 multinode-232100 kubelet[3111]: I0311 20:58:53.560187    3111 topology_manager.go:215] "Topology Admit Handler" podUID="32d28c9d-7ec7-44b0-9dbd-039296a7a274" podNamespace="kube-system" podName="storage-provisioner"
	Mar 11 20:58:53 multinode-232100 kubelet[3111]: I0311 20:58:53.560289    3111 topology_manager.go:215] "Topology Admit Handler" podUID="e93127ae-9454-4660-9b50-359d12adcffe" podNamespace="default" podName="busybox-5b5d89c9d6-4hsnz"
	Mar 11 20:58:53 multinode-232100 kubelet[3111]: I0311 20:58:53.566876    3111 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 11 20:58:53 multinode-232100 kubelet[3111]: I0311 20:58:53.655787    3111 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71289465-761a-45e9-aeea-487886492715-lib-modules\") pod \"kube-proxy-zdkdk\" (UID: \"71289465-761a-45e9-aeea-487886492715\") " pod="kube-system/kube-proxy-zdkdk"
	Mar 11 20:58:53 multinode-232100 kubelet[3111]: I0311 20:58:53.656103    3111 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a818af00-dedc-4df2-98f0-0f657141080e-xtables-lock\") pod \"kindnet-glj55\" (UID: \"a818af00-dedc-4df2-98f0-0f657141080e\") " pod="kube-system/kindnet-glj55"
	Mar 11 20:58:53 multinode-232100 kubelet[3111]: I0311 20:58:53.656292    3111 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a818af00-dedc-4df2-98f0-0f657141080e-lib-modules\") pod \"kindnet-glj55\" (UID: \"a818af00-dedc-4df2-98f0-0f657141080e\") " pod="kube-system/kindnet-glj55"
	Mar 11 20:58:53 multinode-232100 kubelet[3111]: I0311 20:58:53.656456    3111 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71289465-761a-45e9-aeea-487886492715-xtables-lock\") pod \"kube-proxy-zdkdk\" (UID: \"71289465-761a-45e9-aeea-487886492715\") " pod="kube-system/kube-proxy-zdkdk"
	Mar 11 20:58:53 multinode-232100 kubelet[3111]: I0311 20:58:53.656516    3111 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a818af00-dedc-4df2-98f0-0f657141080e-cni-cfg\") pod \"kindnet-glj55\" (UID: \"a818af00-dedc-4df2-98f0-0f657141080e\") " pod="kube-system/kindnet-glj55"
	Mar 11 20:58:53 multinode-232100 kubelet[3111]: I0311 20:58:53.656535    3111 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/32d28c9d-7ec7-44b0-9dbd-039296a7a274-tmp\") pod \"storage-provisioner\" (UID: \"32d28c9d-7ec7-44b0-9dbd-039296a7a274\") " pod="kube-system/storage-provisioner"
	Mar 11 20:59:01 multinode-232100 kubelet[3111]: I0311 20:59:01.961982    3111 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 11 20:59:48 multinode-232100 kubelet[3111]: E0311 20:59:48.611615    3111 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 20:59:48 multinode-232100 kubelet[3111]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 20:59:48 multinode-232100 kubelet[3111]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 20:59:48 multinode-232100 kubelet[3111]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 20:59:48 multinode-232100 kubelet[3111]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 20:59:48 multinode-232100 kubelet[3111]: E0311 20:59:48.667345    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/podc2b9427c-06b4-4f56-bc4a-4adc16471a65/crio-62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722: Error finding container 62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722: Status 404 returned error can't find the container with id 62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722
	Mar 11 20:59:48 multinode-232100 kubelet[3111]: E0311 20:59:48.667676    3111 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pode93127ae-9454-4660-9b50-359d12adcffe/crio-7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97: Error finding container 7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97: Status 404 returned error can't find the container with id 7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97
	Mar 11 20:59:48 multinode-232100 kubelet[3111]: E0311 20:59:48.668082    3111 manager.go:1106] Failed to create existing container: /kubepods/poda818af00-dedc-4df2-98f0-0f657141080e/crio-71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66: Error finding container 71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66: Status 404 returned error can't find the container with id 71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66
	Mar 11 20:59:48 multinode-232100 kubelet[3111]: E0311 20:59:48.668561    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod03d430d93ac79511930f8ee4e584b8a9/crio-7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb: Error finding container 7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb: Status 404 returned error can't find the container with id 7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb
	Mar 11 20:59:48 multinode-232100 kubelet[3111]: E0311 20:59:48.669086    3111 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod32d28c9d-7ec7-44b0-9dbd-039296a7a274/crio-e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5: Error finding container e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5: Status 404 returned error can't find the container with id e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5
	Mar 11 20:59:48 multinode-232100 kubelet[3111]: E0311 20:59:48.669502    3111 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod71289465-761a-45e9-aeea-487886492715/crio-f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce: Error finding container f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce: Status 404 returned error can't find the container with id f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce
	Mar 11 20:59:48 multinode-232100 kubelet[3111]: E0311 20:59:48.669825    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/podc755fbdb681fc0a3c29e9c4a4faa661d/crio-1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86: Error finding container 1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86: Status 404 returned error can't find the container with id 1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86
	Mar 11 20:59:48 multinode-232100 kubelet[3111]: E0311 20:59:48.670258    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/pode47e5bbe85a59f76ef5b1b2f838a8fd1/crio-e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820: Error finding container e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820: Status 404 returned error can't find the container with id e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820
	Mar 11 20:59:48 multinode-232100 kubelet[3111]: E0311 20:59:48.670608    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod0e6c74ae7825d32a30354efaeda334ed/crio-3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde: Error finding container 3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde: Status 404 returned error can't find the container with id 3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:00:17.607767   44008 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18358-11004/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
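
The "bufio.Scanner: token too long" error in the stderr block above is the standard failure mode of Go's bufio.Scanner when a single line exceeds its default token limit of 64 KiB (bufio.MaxScanTokenSize); here the lastStart.txt log evidently contains one such over-long line. The following is a minimal, hypothetical sketch of that behavior, not minikube's actual logs.go code; the file name is only a stand-in for the path shown in the error, and the 1 MiB cap is an arbitrary example value.

// Sketch: reading a log file line-by-line; with the default Scanner buffer an
// over-long line surfaces as bufio.ErrTooLong ("bufio.Scanner: token too long").
// Calling Scanner.Buffer with a larger max avoids it.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // stand-in for the path in the error above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default max token size is 64 KiB; allow lines up to 1 MiB instead.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err) // bufio.ErrTooLong without the larger buffer
	}
}
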
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-232100 -n multinode-232100
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-232100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (309.74s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 stop
E0311 21:01:58.809916   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-232100 stop: exit status 82 (2m0.481078186s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-232100-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-232100 stop": exit status 82
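
In this run, exit status 82 from "minikube stop" accompanied the GUEST_STOP_TIMEOUT message shown in the stderr block above. As an illustration only (this is not the actual multinode_test.go helper), a caller can separate that timeout code from other failures by inspecting the process exit code via *exec.ExitError; the binary path and profile name below are taken from the command line in this report.

// Sketch: run the stop command and branch on its exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-232100", "stop")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		switch code := exitErr.ExitCode(); code {
		case 82:
			// The exit code observed here alongside GUEST_STOP_TIMEOUT.
			fmt.Println("stop timed out waiting for the guest VM")
		default:
			fmt.Printf("minikube stop failed with exit status %d\n", code)
		}
	} else if err != nil {
		fmt.Println("could not run minikube stop:", err)
	}
}
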
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 status
E0311 21:02:38.935168   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-232100 status: exit status 3 (18.776314262s)

                                                
                                                
-- stdout --
	multinode-232100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-232100-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:02:41.385019   44546 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.4:22: connect: no route to host
	E0311 21:02:41.385052   44546 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.4:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-232100 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-232100 -n multinode-232100
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-232100 logs -n 25: (1.612977858s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-232100 cp multinode-232100-m02:/home/docker/cp-test.txt                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100:/home/docker/cp-test_multinode-232100-m02_multinode-232100.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n multinode-232100 sudo cat                                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | /home/docker/cp-test_multinode-232100-m02_multinode-232100.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-232100 cp multinode-232100-m02:/home/docker/cp-test.txt                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m03:/home/docker/cp-test_multinode-232100-m02_multinode-232100-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n multinode-232100-m03 sudo cat                                   | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | /home/docker/cp-test_multinode-232100-m02_multinode-232100-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-232100 cp testdata/cp-test.txt                                                | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-232100 cp multinode-232100-m03:/home/docker/cp-test.txt                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile149036959/001/cp-test_multinode-232100-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-232100 cp multinode-232100-m03:/home/docker/cp-test.txt                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100:/home/docker/cp-test_multinode-232100-m03_multinode-232100.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n multinode-232100 sudo cat                                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | /home/docker/cp-test_multinode-232100-m03_multinode-232100.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-232100 cp multinode-232100-m03:/home/docker/cp-test.txt                       | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m02:/home/docker/cp-test_multinode-232100-m03_multinode-232100-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n                                                                 | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | multinode-232100-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-232100 ssh -n multinode-232100-m02 sudo cat                                   | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	|         | /home/docker/cp-test_multinode-232100-m03_multinode-232100-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-232100 node stop m03                                                          | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:54 UTC |
	| node    | multinode-232100 node start                                                             | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:54 UTC | 11 Mar 24 20:55 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-232100                                                                | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:55 UTC |                     |
	| stop    | -p multinode-232100                                                                     | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:55 UTC |                     |
	| start   | -p multinode-232100                                                                     | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 20:57 UTC | 11 Mar 24 21:00 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-232100                                                                | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 21:00 UTC |                     |
	| node    | multinode-232100 node delete                                                            | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 21:00 UTC | 11 Mar 24 21:00 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-232100 stop                                                                   | multinode-232100 | jenkins | v1.32.0 | 11 Mar 24 21:00 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 20:57:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 20:57:12.643792   43208 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:57:12.644056   43208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:57:12.644065   43208 out.go:304] Setting ErrFile to fd 2...
	I0311 20:57:12.644069   43208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:57:12.644241   43208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:57:12.644728   43208 out.go:298] Setting JSON to false
	I0311 20:57:12.645636   43208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5982,"bootTime":1710184651,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 20:57:12.645695   43208 start.go:139] virtualization: kvm guest
	I0311 20:57:12.648022   43208 out.go:177] * [multinode-232100] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 20:57:12.649372   43208 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 20:57:12.650651   43208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 20:57:12.649374   43208 notify.go:220] Checking for updates...
	I0311 20:57:12.652097   43208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:57:12.653465   43208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:57:12.654752   43208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 20:57:12.656138   43208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 20:57:12.658068   43208 config.go:182] Loaded profile config "multinode-232100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:57:12.658156   43208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 20:57:12.658536   43208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:57:12.658579   43208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:57:12.673100   43208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34003
	I0311 20:57:12.673499   43208 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:57:12.674138   43208 main.go:141] libmachine: Using API Version  1
	I0311 20:57:12.674184   43208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:57:12.674551   43208 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:57:12.674743   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:57:12.709161   43208 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 20:57:12.710564   43208 start.go:297] selected driver: kvm2
	I0311 20:57:12.710581   43208 start.go:901] validating driver "kvm2" against &{Name:multinode-232100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-232100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:57:12.710694   43208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 20:57:12.710992   43208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:57:12.711054   43208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 20:57:12.726360   43208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 20:57:12.727011   43208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 20:57:12.727042   43208 cni.go:84] Creating CNI manager for ""
	I0311 20:57:12.727049   43208 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0311 20:57:12.727112   43208 start.go:340] cluster config:
	{Name:multinode-232100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-232100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:57:12.727219   43208 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:57:12.728858   43208 out.go:177] * Starting "multinode-232100" primary control-plane node in "multinode-232100" cluster
	I0311 20:57:12.730037   43208 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:57:12.730064   43208 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0311 20:57:12.730077   43208 cache.go:56] Caching tarball of preloaded images
	I0311 20:57:12.730156   43208 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 20:57:12.730170   43208 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 20:57:12.730314   43208 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/config.json ...
	I0311 20:57:12.730535   43208 start.go:360] acquireMachinesLock for multinode-232100: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 20:57:12.730585   43208 start.go:364] duration metric: took 31.267µs to acquireMachinesLock for "multinode-232100"
	I0311 20:57:12.730604   43208 start.go:96] Skipping create...Using existing machine configuration
	I0311 20:57:12.730613   43208 fix.go:54] fixHost starting: 
	I0311 20:57:12.730939   43208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:57:12.730973   43208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:57:12.743949   43208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I0311 20:57:12.744357   43208 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:57:12.744818   43208 main.go:141] libmachine: Using API Version  1
	I0311 20:57:12.744842   43208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:57:12.745200   43208 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:57:12.745419   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:57:12.745599   43208 main.go:141] libmachine: (multinode-232100) Calling .GetState
	I0311 20:57:12.747199   43208 fix.go:112] recreateIfNeeded on multinode-232100: state=Running err=<nil>
	W0311 20:57:12.747215   43208 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 20:57:12.749788   43208 out.go:177] * Updating the running kvm2 "multinode-232100" VM ...
	I0311 20:57:12.751148   43208 machine.go:94] provisionDockerMachine start ...
	I0311 20:57:12.751162   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:57:12.751352   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:57:12.754082   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:12.754467   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:12.754494   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:12.754632   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:57:12.754807   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:12.754962   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:12.755081   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:57:12.755229   43208 main.go:141] libmachine: Using SSH client type: native
	I0311 20:57:12.755452   43208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0311 20:57:12.755469   43208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 20:57:12.867207   43208 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-232100
	
	I0311 20:57:12.867232   43208 main.go:141] libmachine: (multinode-232100) Calling .GetMachineName
	I0311 20:57:12.867472   43208 buildroot.go:166] provisioning hostname "multinode-232100"
	I0311 20:57:12.867501   43208 main.go:141] libmachine: (multinode-232100) Calling .GetMachineName
	I0311 20:57:12.867669   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:57:12.870123   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:12.870478   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:12.870505   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:12.870685   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:57:12.870887   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:12.871034   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:12.871171   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:57:12.871311   43208 main.go:141] libmachine: Using SSH client type: native
	I0311 20:57:12.871448   43208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0311 20:57:12.871460   43208 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-232100 && echo "multinode-232100" | sudo tee /etc/hostname
	I0311 20:57:12.997275   43208 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-232100
	
	I0311 20:57:12.997302   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:57:13.000031   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.000378   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:13.000407   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.000581   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:57:13.000762   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:13.000936   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:13.001081   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:57:13.001236   43208 main.go:141] libmachine: Using SSH client type: native
	I0311 20:57:13.001402   43208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0311 20:57:13.001419   43208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-232100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-232100/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-232100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 20:57:13.110286   43208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
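As an aside, the outcome of the hostname provisioning above can be checked by hand on the guest. The command below is illustrative only and is not part of the test run; it reuses the SSH key path and guest address that appear elsewhere in this log:

    ssh -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/multinode-232100/id_rsa \
        docker@192.168.39.134 'hostname; grep -n "127.0.1.1" /etc/hosts'

The expected output is the hostname multinode-232100 and, if the fallback branch of the script ran, a 127.0.1.1 entry for it.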
	I0311 20:57:13.110315   43208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 20:57:13.110383   43208 buildroot.go:174] setting up certificates
	I0311 20:57:13.110393   43208 provision.go:84] configureAuth start
	I0311 20:57:13.110402   43208 main.go:141] libmachine: (multinode-232100) Calling .GetMachineName
	I0311 20:57:13.110662   43208 main.go:141] libmachine: (multinode-232100) Calling .GetIP
	I0311 20:57:13.113179   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.113521   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:13.113546   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.113718   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:57:13.115812   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.116182   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:13.116214   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.116325   43208 provision.go:143] copyHostCerts
	I0311 20:57:13.116356   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:57:13.116391   43208 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 20:57:13.116401   43208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 20:57:13.116466   43208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 20:57:13.116548   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:57:13.116566   43208 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 20:57:13.116570   43208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 20:57:13.116593   43208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 20:57:13.116648   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:57:13.116669   43208 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 20:57:13.116676   43208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 20:57:13.116697   43208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 20:57:13.116776   43208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.multinode-232100 san=[127.0.0.1 192.168.39.134 localhost minikube multinode-232100]
	I0311 20:57:13.487482   43208 provision.go:177] copyRemoteCerts
	I0311 20:57:13.487536   43208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 20:57:13.487558   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:57:13.490067   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.490382   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:13.490408   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.490593   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:57:13.490789   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:13.490931   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:57:13.491061   43208 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/multinode-232100/id_rsa Username:docker}
	I0311 20:57:13.581296   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0311 20:57:13.581361   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 20:57:13.610600   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0311 20:57:13.610654   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0311 20:57:13.637911   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0311 20:57:13.637962   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 20:57:13.664942   43208 provision.go:87] duration metric: took 554.538819ms to configureAuth
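To see what configureAuth actually provisioned, the server certificate copied to /etc/docker can be inspected on the guest. This is an illustrative check, assuming openssl is available in the Buildroot guest image (which is not guaranteed):

    ssh -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/multinode-232100/id_rsa \
        docker@192.168.39.134 'sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName'

Per the generation step above, the SANs should cover 127.0.0.1, 192.168.39.134, localhost, minikube and multinode-232100.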
	I0311 20:57:13.664966   43208 buildroot.go:189] setting minikube options for container-runtime
	I0311 20:57:13.665169   43208 config.go:182] Loaded profile config "multinode-232100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:57:13.665231   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:57:13.667769   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.668145   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:57:13.668191   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:57:13.668324   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:57:13.668526   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:13.668667   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:57:13.668800   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:57:13.668944   43208 main.go:141] libmachine: Using SSH client type: native
	I0311 20:57:13.669122   43208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0311 20:57:13.669137   43208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 20:58:44.389199   43208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 20:58:44.389228   43208 machine.go:97] duration metric: took 1m31.638069174s to provisionDockerMachine
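The %!s(MISSING) in the command logged at 20:57:13 is an artifact of minikube's own log formatting rather than what ran on the guest; judging by the echoed output above, the step effectively amounts to the following sketch:

    sudo mkdir -p /etc/sysconfig
    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio

That single SSH command was issued at 20:57:13 and only returned at 20:58:44, so the crio restart accounts for almost all of the 1m31s provisionDockerMachine duration reported here.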
	I0311 20:58:44.389241   43208 start.go:293] postStartSetup for "multinode-232100" (driver="kvm2")
	I0311 20:58:44.389251   43208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 20:58:44.389267   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:58:44.389600   43208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 20:58:44.389635   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:58:44.392857   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.393275   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:58:44.393304   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.393447   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:58:44.393628   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:58:44.393791   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:58:44.393935   43208 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/multinode-232100/id_rsa Username:docker}
	I0311 20:58:44.477468   43208 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 20:58:44.481878   43208 command_runner.go:130] > NAME=Buildroot
	I0311 20:58:44.481893   43208 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0311 20:58:44.481897   43208 command_runner.go:130] > ID=buildroot
	I0311 20:58:44.481910   43208 command_runner.go:130] > VERSION_ID=2023.02.9
	I0311 20:58:44.481916   43208 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0311 20:58:44.482187   43208 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 20:58:44.482204   43208 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 20:58:44.482262   43208 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 20:58:44.482355   43208 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 20:58:44.482374   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /etc/ssl/certs/182352.pem
	I0311 20:58:44.482458   43208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 20:58:44.493122   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:58:44.519005   43208 start.go:296] duration metric: took 129.752749ms for postStartSetup
	I0311 20:58:44.519071   43208 fix.go:56] duration metric: took 1m31.78845688s for fixHost
	I0311 20:58:44.519099   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:58:44.521496   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.521835   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:58:44.521863   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.521977   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:58:44.522161   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:58:44.522326   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:58:44.522464   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:58:44.522625   43208 main.go:141] libmachine: Using SSH client type: native
	I0311 20:58:44.522771   43208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0311 20:58:44.522783   43208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 20:58:44.625720   43208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710190724.606533979
	
	I0311 20:58:44.625740   43208 fix.go:216] guest clock: 1710190724.606533979
	I0311 20:58:44.625749   43208 fix.go:229] Guest: 2024-03-11 20:58:44.606533979 +0000 UTC Remote: 2024-03-11 20:58:44.519082181 +0000 UTC m=+91.921532697 (delta=87.451798ms)
	I0311 20:58:44.625792   43208 fix.go:200] guest clock delta is within tolerance: 87.451798ms
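(The date +%!s(MISSING).%!N(MISSING) above is the same log-formatting artifact; the guest is simply asked for date +%s.%N.) The tolerance check is then plain subtraction of the two readings: 1710190724.606533979 - 1710190724.519082181 = 0.087451798 s, i.e. the 87.451798 ms delta reported, so no clock adjustment is attempted here.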
	I0311 20:58:44.625798   43208 start.go:83] releasing machines lock for "multinode-232100", held for 1m31.895201285s
	I0311 20:58:44.625849   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:58:44.626123   43208 main.go:141] libmachine: (multinode-232100) Calling .GetIP
	I0311 20:58:44.628318   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.628775   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:58:44.628817   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.628967   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:58:44.629515   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:58:44.629689   43208 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:58:44.629760   43208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 20:58:44.629818   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:58:44.629918   43208 ssh_runner.go:195] Run: cat /version.json
	I0311 20:58:44.629946   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:58:44.632160   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.632472   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:58:44.632499   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.632518   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.632622   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:58:44.632815   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:58:44.632935   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:58:44.632953   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:44.632971   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:58:44.633143   43208 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/multinode-232100/id_rsa Username:docker}
	I0311 20:58:44.633212   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:58:44.633355   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:58:44.633492   43208 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:58:44.633629   43208 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/multinode-232100/id_rsa Username:docker}
	I0311 20:58:44.729119   43208 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0311 20:58:44.729811   43208 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0311 20:58:44.729956   43208 ssh_runner.go:195] Run: systemctl --version
	I0311 20:58:44.736257   43208 command_runner.go:130] > systemd 252 (252)
	I0311 20:58:44.736300   43208 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0311 20:58:44.736358   43208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 20:58:44.903779   43208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0311 20:58:44.912761   43208 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0311 20:58:44.913320   43208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 20:58:44.913383   43208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 20:58:44.923445   43208 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
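Here, again, %!p(MISSING) stands in for the find -printf '%p, ' format that the logger mangled. With shell quoting restored, a quoting-safe equivalent of the disable step is roughly:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;

In this run nothing matched, hence the "nothing to disable" line above.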
	I0311 20:58:44.923465   43208 start.go:494] detecting cgroup driver to use...
	I0311 20:58:44.923520   43208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 20:58:44.941102   43208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 20:58:44.955088   43208 docker.go:217] disabling cri-docker service (if available) ...
	I0311 20:58:44.955127   43208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 20:58:44.970691   43208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 20:58:44.986246   43208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 20:58:45.136855   43208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 20:58:45.282417   43208 docker.go:233] disabling docker service ...
	I0311 20:58:45.282504   43208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 20:58:45.301648   43208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 20:58:45.315745   43208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 20:58:45.456271   43208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 20:58:45.608200   43208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 20:58:45.625497   43208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 20:58:45.648101   43208 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0311 20:58:45.648562   43208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 20:58:45.648615   43208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:58:45.659704   43208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 20:58:45.659761   43208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:58:45.671881   43208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:58:45.683461   43208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 20:58:45.695287   43208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 20:58:45.706500   43208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 20:58:45.716059   43208 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0311 20:58:45.716212   43208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 20:58:45.726043   43208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:58:45.867507   43208 ssh_runner.go:195] Run: sudo systemctl restart crio
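Taken together, the CRI-O reconfiguration performed by the run of tee/sed commands above boils down to the following sketch; the drop-in path and values are the ones shown in the log, consolidated here only for readability:

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and switch CRI-O to the cgroupfs cgroup manager, with conmon in the pod cgroup
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # drop any minikube-generated CNI config and make sure IPv4 forwarding is on
    sudo rm -rf /etc/cni/net.mk
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio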
	I0311 20:58:46.110860   43208 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 20:58:46.110937   43208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 20:58:46.117051   43208 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0311 20:58:46.117069   43208 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0311 20:58:46.117075   43208 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0311 20:58:46.117084   43208 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0311 20:58:46.117092   43208 command_runner.go:130] > Access: 2024-03-11 20:58:45.990375949 +0000
	I0311 20:58:46.117102   43208 command_runner.go:130] > Modify: 2024-03-11 20:58:45.981375587 +0000
	I0311 20:58:46.117111   43208 command_runner.go:130] > Change: 2024-03-11 20:58:45.981375587 +0000
	I0311 20:58:46.117122   43208 command_runner.go:130] >  Birth: -
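The 60-second wait declared above is, in effect, a poll on the socket path until stat succeeds. A standalone equivalent looks like this (illustrative only; minikube's internal retry cadence is not shown in the log):

    # wait up to 60s for the CRI-O socket to appear
    for i in $(seq 1 60); do
        stat /var/run/crio/crio.sock >/dev/null 2>&1 && { echo "crio socket is up"; break; }
        sleep 1
    done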
	I0311 20:58:46.117265   43208 start.go:562] Will wait 60s for crictl version
	I0311 20:58:46.117306   43208 ssh_runner.go:195] Run: which crictl
	I0311 20:58:46.121613   43208 command_runner.go:130] > /usr/bin/crictl
	I0311 20:58:46.121656   43208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 20:58:46.160039   43208 command_runner.go:130] > Version:  0.1.0
	I0311 20:58:46.160061   43208 command_runner.go:130] > RuntimeName:  cri-o
	I0311 20:58:46.160069   43208 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0311 20:58:46.160076   43208 command_runner.go:130] > RuntimeApiVersion:  v1
	I0311 20:58:46.160099   43208 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 20:58:46.160158   43208 ssh_runner.go:195] Run: crio --version
	I0311 20:58:46.189699   43208 command_runner.go:130] > crio version 1.29.1
	I0311 20:58:46.189721   43208 command_runner.go:130] > Version:        1.29.1
	I0311 20:58:46.189726   43208 command_runner.go:130] > GitCommit:      unknown
	I0311 20:58:46.189730   43208 command_runner.go:130] > GitCommitDate:  unknown
	I0311 20:58:46.189744   43208 command_runner.go:130] > GitTreeState:   clean
	I0311 20:58:46.189750   43208 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0311 20:58:46.189755   43208 command_runner.go:130] > GoVersion:      go1.21.6
	I0311 20:58:46.189759   43208 command_runner.go:130] > Compiler:       gc
	I0311 20:58:46.189766   43208 command_runner.go:130] > Platform:       linux/amd64
	I0311 20:58:46.189770   43208 command_runner.go:130] > Linkmode:       dynamic
	I0311 20:58:46.189774   43208 command_runner.go:130] > BuildTags:      
	I0311 20:58:46.189779   43208 command_runner.go:130] >   containers_image_ostree_stub
	I0311 20:58:46.189783   43208 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0311 20:58:46.189787   43208 command_runner.go:130] >   btrfs_noversion
	I0311 20:58:46.189794   43208 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0311 20:58:46.189800   43208 command_runner.go:130] >   libdm_no_deferred_remove
	I0311 20:58:46.189805   43208 command_runner.go:130] >   seccomp
	I0311 20:58:46.189812   43208 command_runner.go:130] > LDFlags:          unknown
	I0311 20:58:46.189818   43208 command_runner.go:130] > SeccompEnabled:   true
	I0311 20:58:46.189828   43208 command_runner.go:130] > AppArmorEnabled:  false
	I0311 20:58:46.191233   43208 ssh_runner.go:195] Run: crio --version
	I0311 20:58:46.221405   43208 command_runner.go:130] > crio version 1.29.1
	I0311 20:58:46.221425   43208 command_runner.go:130] > Version:        1.29.1
	I0311 20:58:46.221432   43208 command_runner.go:130] > GitCommit:      unknown
	I0311 20:58:46.221436   43208 command_runner.go:130] > GitCommitDate:  unknown
	I0311 20:58:46.221440   43208 command_runner.go:130] > GitTreeState:   clean
	I0311 20:58:46.221447   43208 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0311 20:58:46.221453   43208 command_runner.go:130] > GoVersion:      go1.21.6
	I0311 20:58:46.221460   43208 command_runner.go:130] > Compiler:       gc
	I0311 20:58:46.221487   43208 command_runner.go:130] > Platform:       linux/amd64
	I0311 20:58:46.221498   43208 command_runner.go:130] > Linkmode:       dynamic
	I0311 20:58:46.221502   43208 command_runner.go:130] > BuildTags:      
	I0311 20:58:46.221506   43208 command_runner.go:130] >   containers_image_ostree_stub
	I0311 20:58:46.221511   43208 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0311 20:58:46.221516   43208 command_runner.go:130] >   btrfs_noversion
	I0311 20:58:46.221520   43208 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0311 20:58:46.221527   43208 command_runner.go:130] >   libdm_no_deferred_remove
	I0311 20:58:46.221530   43208 command_runner.go:130] >   seccomp
	I0311 20:58:46.221534   43208 command_runner.go:130] > LDFlags:          unknown
	I0311 20:58:46.221542   43208 command_runner.go:130] > SeccompEnabled:   true
	I0311 20:58:46.221549   43208 command_runner.go:130] > AppArmorEnabled:  false
	I0311 20:58:46.225382   43208 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 20:58:46.226927   43208 main.go:141] libmachine: (multinode-232100) Calling .GetIP
	I0311 20:58:46.229474   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:46.229844   43208 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:58:46.229872   43208 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:58:46.230083   43208 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 20:58:46.234716   43208 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0311 20:58:46.234788   43208 kubeadm.go:877] updating cluster {Name:multinode-232100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-232100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 20:58:46.234903   43208 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 20:58:46.234950   43208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:58:46.285359   43208 command_runner.go:130] > {
	I0311 20:58:46.285380   43208 command_runner.go:130] >   "images": [
	I0311 20:58:46.285384   43208 command_runner.go:130] >     {
	I0311 20:58:46.285395   43208 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0311 20:58:46.285409   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.285418   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0311 20:58:46.285423   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285427   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.285437   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0311 20:58:46.285451   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0311 20:58:46.285461   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285472   43208 command_runner.go:130] >       "size": "65258016",
	I0311 20:58:46.285483   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.285489   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.285502   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.285509   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.285512   43208 command_runner.go:130] >     },
	I0311 20:58:46.285515   43208 command_runner.go:130] >     {
	I0311 20:58:46.285524   43208 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0311 20:58:46.285533   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.285545   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0311 20:58:46.285555   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285565   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.285579   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0311 20:58:46.285589   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0311 20:58:46.285595   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285600   43208 command_runner.go:130] >       "size": "65291810",
	I0311 20:58:46.285606   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.285612   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.285620   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.285630   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.285639   43208 command_runner.go:130] >     },
	I0311 20:58:46.285654   43208 command_runner.go:130] >     {
	I0311 20:58:46.285666   43208 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0311 20:58:46.285676   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.285686   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0311 20:58:46.285692   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285696   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.285707   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0311 20:58:46.285722   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0311 20:58:46.285738   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285749   43208 command_runner.go:130] >       "size": "1363676",
	I0311 20:58:46.285758   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.285768   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.285777   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.285784   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.285787   43208 command_runner.go:130] >     },
	I0311 20:58:46.285797   43208 command_runner.go:130] >     {
	I0311 20:58:46.285807   43208 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0311 20:58:46.285817   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.285828   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0311 20:58:46.285837   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285846   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.285861   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0311 20:58:46.285882   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0311 20:58:46.285893   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285901   43208 command_runner.go:130] >       "size": "31470524",
	I0311 20:58:46.285907   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.285917   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.285926   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.285936   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.285944   43208 command_runner.go:130] >     },
	I0311 20:58:46.285952   43208 command_runner.go:130] >     {
	I0311 20:58:46.285960   43208 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0311 20:58:46.285967   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.285979   43208 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0311 20:58:46.285989   43208 command_runner.go:130] >       ],
	I0311 20:58:46.285999   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286011   43208 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0311 20:58:46.286026   43208 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0311 20:58:46.286035   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286043   43208 command_runner.go:130] >       "size": "53621675",
	I0311 20:58:46.286047   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.286056   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286065   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286076   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.286091   43208 command_runner.go:130] >     },
	I0311 20:58:46.286100   43208 command_runner.go:130] >     {
	I0311 20:58:46.286111   43208 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0311 20:58:46.286121   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.286130   43208 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0311 20:58:46.286137   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286143   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286157   43208 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0311 20:58:46.286172   43208 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0311 20:58:46.286182   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286191   43208 command_runner.go:130] >       "size": "295456551",
	I0311 20:58:46.286201   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.286209   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.286217   43208 command_runner.go:130] >       },
	I0311 20:58:46.286225   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286229   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286238   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.286247   43208 command_runner.go:130] >     },
	I0311 20:58:46.286256   43208 command_runner.go:130] >     {
	I0311 20:58:46.286269   43208 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0311 20:58:46.286278   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.286289   43208 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0311 20:58:46.286298   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286307   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286317   43208 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0311 20:58:46.286329   43208 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0311 20:58:46.286339   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286349   43208 command_runner.go:130] >       "size": "127226832",
	I0311 20:58:46.286358   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.286367   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.286375   43208 command_runner.go:130] >       },
	I0311 20:58:46.286385   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286394   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286401   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.286404   43208 command_runner.go:130] >     },
	I0311 20:58:46.286412   43208 command_runner.go:130] >     {
	I0311 20:58:46.286429   43208 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0311 20:58:46.286439   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.286447   43208 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0311 20:58:46.286453   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286459   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286487   43208 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0311 20:58:46.286498   43208 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0311 20:58:46.286503   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286510   43208 command_runner.go:130] >       "size": "123261750",
	I0311 20:58:46.286516   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.286521   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.286527   43208 command_runner.go:130] >       },
	I0311 20:58:46.286533   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286539   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286548   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.286552   43208 command_runner.go:130] >     },
	I0311 20:58:46.286557   43208 command_runner.go:130] >     {
	I0311 20:58:46.286566   43208 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0311 20:58:46.286573   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.286579   43208 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0311 20:58:46.286585   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286592   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286601   43208 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0311 20:58:46.286611   43208 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0311 20:58:46.286616   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286622   43208 command_runner.go:130] >       "size": "74749335",
	I0311 20:58:46.286627   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.286634   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286639   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286655   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.286661   43208 command_runner.go:130] >     },
	I0311 20:58:46.286666   43208 command_runner.go:130] >     {
	I0311 20:58:46.286676   43208 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0311 20:58:46.286682   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.286690   43208 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0311 20:58:46.286696   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286711   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286723   43208 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0311 20:58:46.286730   43208 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0311 20:58:46.286736   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286740   43208 command_runner.go:130] >       "size": "61551410",
	I0311 20:58:46.286743   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.286747   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.286751   43208 command_runner.go:130] >       },
	I0311 20:58:46.286755   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286758   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286763   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.286766   43208 command_runner.go:130] >     },
	I0311 20:58:46.286769   43208 command_runner.go:130] >     {
	I0311 20:58:46.286775   43208 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0311 20:58:46.286780   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.286784   43208 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0311 20:58:46.286787   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286791   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.286798   43208 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0311 20:58:46.286805   43208 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0311 20:58:46.286808   43208 command_runner.go:130] >       ],
	I0311 20:58:46.286820   43208 command_runner.go:130] >       "size": "750414",
	I0311 20:58:46.286823   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.286827   43208 command_runner.go:130] >         "value": "65535"
	I0311 20:58:46.286830   43208 command_runner.go:130] >       },
	I0311 20:58:46.286834   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.286841   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.286845   43208 command_runner.go:130] >       "pinned": true
	I0311 20:58:46.286848   43208 command_runner.go:130] >     }
	I0311 20:58:46.286851   43208 command_runner.go:130] >   ]
	I0311 20:58:46.286854   43208 command_runner.go:130] > }
	I0311 20:58:46.287020   43208 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 20:58:46.287030   43208 crio.go:415] Images already preloaded, skipping extraction
	I0311 20:58:46.287067   43208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 20:58:46.322073   43208 command_runner.go:130] > {
	I0311 20:58:46.322103   43208 command_runner.go:130] >   "images": [
	I0311 20:58:46.322111   43208 command_runner.go:130] >     {
	I0311 20:58:46.322120   43208 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0311 20:58:46.322126   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322132   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0311 20:58:46.322136   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322140   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322151   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0311 20:58:46.322160   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0311 20:58:46.322166   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322171   43208 command_runner.go:130] >       "size": "65258016",
	I0311 20:58:46.322175   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.322179   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322184   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322193   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322197   43208 command_runner.go:130] >     },
	I0311 20:58:46.322201   43208 command_runner.go:130] >     {
	I0311 20:58:46.322209   43208 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0311 20:58:46.322213   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322218   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0311 20:58:46.322222   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322227   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322234   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0311 20:58:46.322241   43208 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0311 20:58:46.322247   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322250   43208 command_runner.go:130] >       "size": "65291810",
	I0311 20:58:46.322257   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.322264   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322270   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322274   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322280   43208 command_runner.go:130] >     },
	I0311 20:58:46.322284   43208 command_runner.go:130] >     {
	I0311 20:58:46.322292   43208 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0311 20:58:46.322298   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322303   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0311 20:58:46.322309   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322318   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322327   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0311 20:58:46.322337   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0311 20:58:46.322342   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322346   43208 command_runner.go:130] >       "size": "1363676",
	I0311 20:58:46.322352   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.322356   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322362   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322366   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322372   43208 command_runner.go:130] >     },
	I0311 20:58:46.322376   43208 command_runner.go:130] >     {
	I0311 20:58:46.322384   43208 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0311 20:58:46.322388   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322396   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0311 20:58:46.322399   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322405   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322412   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0311 20:58:46.322426   43208 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0311 20:58:46.322432   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322436   43208 command_runner.go:130] >       "size": "31470524",
	I0311 20:58:46.322442   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.322446   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322452   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322456   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322461   43208 command_runner.go:130] >     },
	I0311 20:58:46.322465   43208 command_runner.go:130] >     {
	I0311 20:58:46.322473   43208 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0311 20:58:46.322478   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322485   43208 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0311 20:58:46.322488   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322495   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322502   43208 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0311 20:58:46.322511   43208 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0311 20:58:46.322517   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322521   43208 command_runner.go:130] >       "size": "53621675",
	I0311 20:58:46.322527   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.322535   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322541   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322545   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322551   43208 command_runner.go:130] >     },
	I0311 20:58:46.322554   43208 command_runner.go:130] >     {
	I0311 20:58:46.322563   43208 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0311 20:58:46.322569   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322574   43208 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0311 20:58:46.322579   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322584   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322593   43208 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0311 20:58:46.322601   43208 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0311 20:58:46.322607   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322611   43208 command_runner.go:130] >       "size": "295456551",
	I0311 20:58:46.322617   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.322621   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.322627   43208 command_runner.go:130] >       },
	I0311 20:58:46.322630   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322636   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322642   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322647   43208 command_runner.go:130] >     },
	I0311 20:58:46.322650   43208 command_runner.go:130] >     {
	I0311 20:58:46.322656   43208 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0311 20:58:46.322662   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322667   43208 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0311 20:58:46.322673   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322677   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322686   43208 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0311 20:58:46.322695   43208 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0311 20:58:46.322701   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322705   43208 command_runner.go:130] >       "size": "127226832",
	I0311 20:58:46.322711   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.322716   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.322720   43208 command_runner.go:130] >       },
	I0311 20:58:46.322726   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322730   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322741   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322746   43208 command_runner.go:130] >     },
	I0311 20:58:46.322750   43208 command_runner.go:130] >     {
	I0311 20:58:46.322758   43208 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0311 20:58:46.322765   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322770   43208 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0311 20:58:46.322775   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322780   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322825   43208 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0311 20:58:46.322838   43208 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0311 20:58:46.322842   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322846   43208 command_runner.go:130] >       "size": "123261750",
	I0311 20:58:46.322851   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.322860   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.322869   43208 command_runner.go:130] >       },
	I0311 20:58:46.322879   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322886   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.322890   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.322896   43208 command_runner.go:130] >     },
	I0311 20:58:46.322899   43208 command_runner.go:130] >     {
	I0311 20:58:46.322908   43208 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0311 20:58:46.322914   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.322919   43208 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0311 20:58:46.322925   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322929   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.322939   43208 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0311 20:58:46.322953   43208 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0311 20:58:46.322962   43208 command_runner.go:130] >       ],
	I0311 20:58:46.322969   43208 command_runner.go:130] >       "size": "74749335",
	I0311 20:58:46.322979   43208 command_runner.go:130] >       "uid": null,
	I0311 20:58:46.322989   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.322998   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.323002   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.323006   43208 command_runner.go:130] >     },
	I0311 20:58:46.323012   43208 command_runner.go:130] >     {
	I0311 20:58:46.323018   43208 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0311 20:58:46.323028   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.323036   43208 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0311 20:58:46.323042   43208 command_runner.go:130] >       ],
	I0311 20:58:46.323048   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.323063   43208 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0311 20:58:46.323078   43208 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0311 20:58:46.323087   43208 command_runner.go:130] >       ],
	I0311 20:58:46.323099   43208 command_runner.go:130] >       "size": "61551410",
	I0311 20:58:46.323108   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.323115   43208 command_runner.go:130] >         "value": "0"
	I0311 20:58:46.323119   43208 command_runner.go:130] >       },
	I0311 20:58:46.323122   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.323129   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.323133   43208 command_runner.go:130] >       "pinned": false
	I0311 20:58:46.323139   43208 command_runner.go:130] >     },
	I0311 20:58:46.323143   43208 command_runner.go:130] >     {
	I0311 20:58:46.323153   43208 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0311 20:58:46.323162   43208 command_runner.go:130] >       "repoTags": [
	I0311 20:58:46.323173   43208 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0311 20:58:46.323179   43208 command_runner.go:130] >       ],
	I0311 20:58:46.323193   43208 command_runner.go:130] >       "repoDigests": [
	I0311 20:58:46.323207   43208 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0311 20:58:46.323221   43208 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0311 20:58:46.323230   43208 command_runner.go:130] >       ],
	I0311 20:58:46.323238   43208 command_runner.go:130] >       "size": "750414",
	I0311 20:58:46.323242   43208 command_runner.go:130] >       "uid": {
	I0311 20:58:46.323246   43208 command_runner.go:130] >         "value": "65535"
	I0311 20:58:46.323250   43208 command_runner.go:130] >       },
	I0311 20:58:46.323259   43208 command_runner.go:130] >       "username": "",
	I0311 20:58:46.323268   43208 command_runner.go:130] >       "spec": null,
	I0311 20:58:46.323278   43208 command_runner.go:130] >       "pinned": true
	I0311 20:58:46.323286   43208 command_runner.go:130] >     }
	I0311 20:58:46.323294   43208 command_runner.go:130] >   ]
	I0311 20:58:46.323299   43208 command_runner.go:130] > }
	I0311 20:58:46.323449   43208 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 20:58:46.323464   43208 cache_images.go:84] Images are preloaded, skipping loading
	I0311 20:58:46.323472   43208 kubeadm.go:928] updating node { 192.168.39.134 8443 v1.28.4 crio true true} ...
	I0311 20:58:46.323584   43208 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-232100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-232100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 20:58:46.323667   43208 ssh_runner.go:195] Run: crio config
	I0311 20:58:46.363856   43208 command_runner.go:130] ! time="2024-03-11 20:58:46.344746966Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0311 20:58:46.369273   43208 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0311 20:58:46.380323   43208 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0311 20:58:46.380343   43208 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0311 20:58:46.380353   43208 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0311 20:58:46.380358   43208 command_runner.go:130] > #
	I0311 20:58:46.380373   43208 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0311 20:58:46.380385   43208 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0311 20:58:46.380393   43208 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0311 20:58:46.380400   43208 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0311 20:58:46.380406   43208 command_runner.go:130] > # reload'.
	I0311 20:58:46.380413   43208 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0311 20:58:46.380421   43208 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0311 20:58:46.380428   43208 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0311 20:58:46.380436   43208 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0311 20:58:46.380442   43208 command_runner.go:130] > [crio]
	I0311 20:58:46.380447   43208 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0311 20:58:46.380454   43208 command_runner.go:130] > # containers images, in this directory.
	I0311 20:58:46.380459   43208 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0311 20:58:46.380470   43208 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0311 20:58:46.380478   43208 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0311 20:58:46.380485   43208 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0311 20:58:46.380491   43208 command_runner.go:130] > # imagestore = ""
	I0311 20:58:46.380497   43208 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0311 20:58:46.380506   43208 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0311 20:58:46.380517   43208 command_runner.go:130] > storage_driver = "overlay"
	I0311 20:58:46.380524   43208 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0311 20:58:46.380533   43208 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0311 20:58:46.380537   43208 command_runner.go:130] > storage_option = [
	I0311 20:58:46.380544   43208 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0311 20:58:46.380547   43208 command_runner.go:130] > ]
	I0311 20:58:46.380553   43208 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0311 20:58:46.380561   43208 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0311 20:58:46.380568   43208 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0311 20:58:46.380574   43208 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0311 20:58:46.380581   43208 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0311 20:58:46.380586   43208 command_runner.go:130] > # always happen on a node reboot
	I0311 20:58:46.380593   43208 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0311 20:58:46.380603   43208 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0311 20:58:46.380612   43208 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0311 20:58:46.380620   43208 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0311 20:58:46.380627   43208 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0311 20:58:46.380634   43208 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0311 20:58:46.380644   43208 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0311 20:58:46.380650   43208 command_runner.go:130] > # internal_wipe = true
	I0311 20:58:46.380657   43208 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0311 20:58:46.380665   43208 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0311 20:58:46.380672   43208 command_runner.go:130] > # internal_repair = false
	I0311 20:58:46.380678   43208 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0311 20:58:46.380691   43208 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0311 20:58:46.380699   43208 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0311 20:58:46.380705   43208 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0311 20:58:46.380713   43208 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0311 20:58:46.380719   43208 command_runner.go:130] > [crio.api]
	I0311 20:58:46.380725   43208 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0311 20:58:46.380732   43208 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0311 20:58:46.380759   43208 command_runner.go:130] > # IP address on which the stream server will listen.
	I0311 20:58:46.380766   43208 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0311 20:58:46.380773   43208 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0311 20:58:46.380780   43208 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0311 20:58:46.380785   43208 command_runner.go:130] > # stream_port = "0"
	I0311 20:58:46.380800   43208 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0311 20:58:46.380806   43208 command_runner.go:130] > # stream_enable_tls = false
	I0311 20:58:46.380812   43208 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0311 20:58:46.380818   43208 command_runner.go:130] > # stream_idle_timeout = ""
	I0311 20:58:46.380824   43208 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0311 20:58:46.380833   43208 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0311 20:58:46.380836   43208 command_runner.go:130] > # minutes.
	I0311 20:58:46.380843   43208 command_runner.go:130] > # stream_tls_cert = ""
	I0311 20:58:46.380848   43208 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0311 20:58:46.380854   43208 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0311 20:58:46.380860   43208 command_runner.go:130] > # stream_tls_key = ""
	I0311 20:58:46.380865   43208 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0311 20:58:46.380873   43208 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0311 20:58:46.380893   43208 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0311 20:58:46.380899   43208 command_runner.go:130] > # stream_tls_ca = ""
	I0311 20:58:46.380906   43208 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0311 20:58:46.380913   43208 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0311 20:58:46.380920   43208 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0311 20:58:46.380926   43208 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0311 20:58:46.380932   43208 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0311 20:58:46.380940   43208 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0311 20:58:46.380946   43208 command_runner.go:130] > [crio.runtime]
	I0311 20:58:46.380952   43208 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0311 20:58:46.380960   43208 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0311 20:58:46.380966   43208 command_runner.go:130] > # "nofile=1024:2048"
	I0311 20:58:46.380973   43208 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0311 20:58:46.380980   43208 command_runner.go:130] > # default_ulimits = [
	I0311 20:58:46.380983   43208 command_runner.go:130] > # ]
	I0311 20:58:46.380989   43208 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0311 20:58:46.380995   43208 command_runner.go:130] > # no_pivot = false
	I0311 20:58:46.381001   43208 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0311 20:58:46.381009   43208 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0311 20:58:46.381016   43208 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0311 20:58:46.381025   43208 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0311 20:58:46.381030   43208 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0311 20:58:46.381038   43208 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0311 20:58:46.381050   43208 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0311 20:58:46.381057   43208 command_runner.go:130] > # Cgroup setting for conmon
	I0311 20:58:46.381063   43208 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0311 20:58:46.381070   43208 command_runner.go:130] > conmon_cgroup = "pod"
	I0311 20:58:46.381076   43208 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0311 20:58:46.381083   43208 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0311 20:58:46.381090   43208 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0311 20:58:46.381096   43208 command_runner.go:130] > conmon_env = [
	I0311 20:58:46.381101   43208 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0311 20:58:46.381109   43208 command_runner.go:130] > ]
	I0311 20:58:46.381114   43208 command_runner.go:130] > # Additional environment variables to set for all the
	I0311 20:58:46.381119   43208 command_runner.go:130] > # containers. These are overridden if set in the
	I0311 20:58:46.381127   43208 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0311 20:58:46.381131   43208 command_runner.go:130] > # default_env = [
	I0311 20:58:46.381134   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381139   43208 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0311 20:58:46.381146   43208 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0311 20:58:46.381150   43208 command_runner.go:130] > # selinux = false
	I0311 20:58:46.381156   43208 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0311 20:58:46.381161   43208 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0311 20:58:46.381169   43208 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0311 20:58:46.381173   43208 command_runner.go:130] > # seccomp_profile = ""
	I0311 20:58:46.381181   43208 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0311 20:58:46.381186   43208 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0311 20:58:46.381194   43208 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0311 20:58:46.381201   43208 command_runner.go:130] > # which might increase security.
	I0311 20:58:46.381205   43208 command_runner.go:130] > # This option is currently deprecated,
	I0311 20:58:46.381214   43208 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0311 20:58:46.381221   43208 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0311 20:58:46.381227   43208 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0311 20:58:46.381236   43208 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0311 20:58:46.381242   43208 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0311 20:58:46.381250   43208 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0311 20:58:46.381257   43208 command_runner.go:130] > # This option supports live configuration reload.
	I0311 20:58:46.381262   43208 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0311 20:58:46.381270   43208 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0311 20:58:46.381279   43208 command_runner.go:130] > # the cgroup blockio controller.
	I0311 20:58:46.381285   43208 command_runner.go:130] > # blockio_config_file = ""
	I0311 20:58:46.381292   43208 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0311 20:58:46.381298   43208 command_runner.go:130] > # blockio parameters.
	I0311 20:58:46.381301   43208 command_runner.go:130] > # blockio_reload = false
	I0311 20:58:46.381310   43208 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0311 20:58:46.381316   43208 command_runner.go:130] > # irqbalance daemon.
	I0311 20:58:46.381321   43208 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0311 20:58:46.381329   43208 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0311 20:58:46.381338   43208 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0311 20:58:46.381344   43208 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0311 20:58:46.381352   43208 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0311 20:58:46.381358   43208 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0311 20:58:46.381365   43208 command_runner.go:130] > # This option supports live configuration reload.
	I0311 20:58:46.381369   43208 command_runner.go:130] > # rdt_config_file = ""
	I0311 20:58:46.381376   43208 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0311 20:58:46.381380   43208 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0311 20:58:46.381409   43208 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0311 20:58:46.381418   43208 command_runner.go:130] > # separate_pull_cgroup = ""
	I0311 20:58:46.381423   43208 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0311 20:58:46.381429   43208 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0311 20:58:46.381434   43208 command_runner.go:130] > # will be added.
	I0311 20:58:46.381438   43208 command_runner.go:130] > # default_capabilities = [
	I0311 20:58:46.381445   43208 command_runner.go:130] > # 	"CHOWN",
	I0311 20:58:46.381448   43208 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0311 20:58:46.381454   43208 command_runner.go:130] > # 	"FSETID",
	I0311 20:58:46.381458   43208 command_runner.go:130] > # 	"FOWNER",
	I0311 20:58:46.381462   43208 command_runner.go:130] > # 	"SETGID",
	I0311 20:58:46.381465   43208 command_runner.go:130] > # 	"SETUID",
	I0311 20:58:46.381469   43208 command_runner.go:130] > # 	"SETPCAP",
	I0311 20:58:46.381475   43208 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0311 20:58:46.381479   43208 command_runner.go:130] > # 	"KILL",
	I0311 20:58:46.381484   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381492   43208 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0311 20:58:46.381500   43208 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0311 20:58:46.381507   43208 command_runner.go:130] > # add_inheritable_capabilities = false
	I0311 20:58:46.381518   43208 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0311 20:58:46.381527   43208 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0311 20:58:46.381533   43208 command_runner.go:130] > # default_sysctls = [
	I0311 20:58:46.381536   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381541   43208 command_runner.go:130] > # List of devices on the host that a
	I0311 20:58:46.381548   43208 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0311 20:58:46.381555   43208 command_runner.go:130] > # allowed_devices = [
	I0311 20:58:46.381558   43208 command_runner.go:130] > # 	"/dev/fuse",
	I0311 20:58:46.381564   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381568   43208 command_runner.go:130] > # List of additional devices. specified as
	I0311 20:58:46.381577   43208 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0311 20:58:46.381585   43208 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0311 20:58:46.381593   43208 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0311 20:58:46.381597   43208 command_runner.go:130] > # additional_devices = [
	I0311 20:58:46.381603   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381608   43208 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0311 20:58:46.381615   43208 command_runner.go:130] > # cdi_spec_dirs = [
	I0311 20:58:46.381619   43208 command_runner.go:130] > # 	"/etc/cdi",
	I0311 20:58:46.381625   43208 command_runner.go:130] > # 	"/var/run/cdi",
	I0311 20:58:46.381628   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381636   43208 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0311 20:58:46.381643   43208 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0311 20:58:46.381650   43208 command_runner.go:130] > # Defaults to false.
	I0311 20:58:46.381655   43208 command_runner.go:130] > # device_ownership_from_security_context = false
	I0311 20:58:46.381663   43208 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0311 20:58:46.381671   43208 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0311 20:58:46.381677   43208 command_runner.go:130] > # hooks_dir = [
	I0311 20:58:46.381681   43208 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0311 20:58:46.381691   43208 command_runner.go:130] > # ]
	I0311 20:58:46.381699   43208 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0311 20:58:46.381707   43208 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0311 20:58:46.381715   43208 command_runner.go:130] > # its default mounts from the following two files:
	I0311 20:58:46.381720   43208 command_runner.go:130] > #
	I0311 20:58:46.381726   43208 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0311 20:58:46.381735   43208 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0311 20:58:46.381743   43208 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0311 20:58:46.381752   43208 command_runner.go:130] > #
	I0311 20:58:46.381761   43208 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0311 20:58:46.381767   43208 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0311 20:58:46.381776   43208 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0311 20:58:46.381783   43208 command_runner.go:130] > #      only add mounts it finds in this file.
	I0311 20:58:46.381786   43208 command_runner.go:130] > #
	I0311 20:58:46.381793   43208 command_runner.go:130] > # default_mounts_file = ""
	I0311 20:58:46.381798   43208 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0311 20:58:46.381806   43208 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0311 20:58:46.381813   43208 command_runner.go:130] > pids_limit = 1024
	I0311 20:58:46.381819   43208 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0311 20:58:46.381827   43208 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0311 20:58:46.381833   43208 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0311 20:58:46.381843   43208 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0311 20:58:46.381849   43208 command_runner.go:130] > # log_size_max = -1
	I0311 20:58:46.381856   43208 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0311 20:58:46.381863   43208 command_runner.go:130] > # log_to_journald = false
	I0311 20:58:46.381869   43208 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0311 20:58:46.381876   43208 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0311 20:58:46.381884   43208 command_runner.go:130] > # Path to directory for container attach sockets.
	I0311 20:58:46.381889   43208 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0311 20:58:46.381896   43208 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0311 20:58:46.381900   43208 command_runner.go:130] > # bind_mount_prefix = ""
	I0311 20:58:46.381906   43208 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0311 20:58:46.381912   43208 command_runner.go:130] > # read_only = false
	I0311 20:58:46.381918   43208 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0311 20:58:46.381927   43208 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0311 20:58:46.381933   43208 command_runner.go:130] > # live configuration reload.
	I0311 20:58:46.381937   43208 command_runner.go:130] > # log_level = "info"
	I0311 20:58:46.381945   43208 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0311 20:58:46.381952   43208 command_runner.go:130] > # This option supports live configuration reload.
	I0311 20:58:46.381956   43208 command_runner.go:130] > # log_filter = ""
	I0311 20:58:46.381965   43208 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0311 20:58:46.381974   43208 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0311 20:58:46.381979   43208 command_runner.go:130] > # separated by comma.
	I0311 20:58:46.381987   43208 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0311 20:58:46.381997   43208 command_runner.go:130] > # uid_mappings = ""
	I0311 20:58:46.382005   43208 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0311 20:58:46.382013   43208 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0311 20:58:46.382020   43208 command_runner.go:130] > # separated by comma.
	I0311 20:58:46.382027   43208 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0311 20:58:46.382033   43208 command_runner.go:130] > # gid_mappings = ""
	I0311 20:58:46.382039   43208 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0311 20:58:46.382046   43208 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0311 20:58:46.382054   43208 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0311 20:58:46.382061   43208 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0311 20:58:46.382068   43208 command_runner.go:130] > # minimum_mappable_uid = -1
	I0311 20:58:46.382074   43208 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0311 20:58:46.382084   43208 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0311 20:58:46.382092   43208 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0311 20:58:46.382101   43208 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0311 20:58:46.382108   43208 command_runner.go:130] > # minimum_mappable_gid = -1
	I0311 20:58:46.382113   43208 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0311 20:58:46.382122   43208 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0311 20:58:46.382130   43208 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0311 20:58:46.382134   43208 command_runner.go:130] > # ctr_stop_timeout = 30
	I0311 20:58:46.382142   43208 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0311 20:58:46.382150   43208 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0311 20:58:46.382154   43208 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0311 20:58:46.382159   43208 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0311 20:58:46.382165   43208 command_runner.go:130] > drop_infra_ctr = false
	I0311 20:58:46.382171   43208 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0311 20:58:46.382179   43208 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0311 20:58:46.382189   43208 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0311 20:58:46.382195   43208 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0311 20:58:46.382202   43208 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0311 20:58:46.382209   43208 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0311 20:58:46.382215   43208 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0311 20:58:46.382221   43208 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0311 20:58:46.382225   43208 command_runner.go:130] > # shared_cpuset = ""
	I0311 20:58:46.382233   43208 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0311 20:58:46.382241   43208 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0311 20:58:46.382249   43208 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0311 20:58:46.382258   43208 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0311 20:58:46.382264   43208 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0311 20:58:46.382270   43208 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0311 20:58:46.382278   43208 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0311 20:58:46.382283   43208 command_runner.go:130] > # enable_criu_support = false
	I0311 20:58:46.382287   43208 command_runner.go:130] > # Enable/disable the generation of the container,
	I0311 20:58:46.382296   43208 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0311 20:58:46.382303   43208 command_runner.go:130] > # enable_pod_events = false
	I0311 20:58:46.382308   43208 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0311 20:58:46.382322   43208 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0311 20:58:46.382329   43208 command_runner.go:130] > # default_runtime = "runc"
	I0311 20:58:46.382338   43208 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0311 20:58:46.382348   43208 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0311 20:58:46.382358   43208 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0311 20:58:46.382366   43208 command_runner.go:130] > # creation as a file is not desired either.
	I0311 20:58:46.382373   43208 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0311 20:58:46.382380   43208 command_runner.go:130] > # the hostname is being managed dynamically.
	I0311 20:58:46.382384   43208 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0311 20:58:46.382390   43208 command_runner.go:130] > # ]
	I0311 20:58:46.382396   43208 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0311 20:58:46.382404   43208 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0311 20:58:46.382410   43208 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0311 20:58:46.382417   43208 command_runner.go:130] > # Each entry in the table should follow the format:
	I0311 20:58:46.382420   43208 command_runner.go:130] > #
	I0311 20:58:46.382425   43208 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0311 20:58:46.382432   43208 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0311 20:58:46.382436   43208 command_runner.go:130] > # runtime_type = "oci"
	I0311 20:58:46.382499   43208 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0311 20:58:46.382510   43208 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0311 20:58:46.382520   43208 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0311 20:58:46.382526   43208 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0311 20:58:46.382533   43208 command_runner.go:130] > # monitor_env = []
	I0311 20:58:46.382537   43208 command_runner.go:130] > # privileged_without_host_devices = false
	I0311 20:58:46.382544   43208 command_runner.go:130] > # allowed_annotations = []
	I0311 20:58:46.382556   43208 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0311 20:58:46.382563   43208 command_runner.go:130] > # Where:
	I0311 20:58:46.382568   43208 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0311 20:58:46.382577   43208 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0311 20:58:46.382585   43208 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0311 20:58:46.382593   43208 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0311 20:58:46.382599   43208 command_runner.go:130] > #   in $PATH.
	I0311 20:58:46.382605   43208 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0311 20:58:46.382612   43208 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0311 20:58:46.382620   43208 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0311 20:58:46.382626   43208 command_runner.go:130] > #   state.
	I0311 20:58:46.382632   43208 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0311 20:58:46.382640   43208 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0311 20:58:46.382649   43208 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0311 20:58:46.382656   43208 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0311 20:58:46.382664   43208 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0311 20:58:46.382673   43208 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0311 20:58:46.382680   43208 command_runner.go:130] > #   The currently recognized values are:
	I0311 20:58:46.382691   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0311 20:58:46.382700   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0311 20:58:46.382708   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0311 20:58:46.382716   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0311 20:58:46.382726   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0311 20:58:46.382735   43208 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0311 20:58:46.382744   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0311 20:58:46.382752   43208 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0311 20:58:46.382760   43208 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0311 20:58:46.382765   43208 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0311 20:58:46.382772   43208 command_runner.go:130] > #   deprecated option "conmon".
	I0311 20:58:46.382778   43208 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0311 20:58:46.382786   43208 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0311 20:58:46.382792   43208 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0311 20:58:46.382799   43208 command_runner.go:130] > #   should be moved to the container's cgroup
	I0311 20:58:46.382805   43208 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0311 20:58:46.382812   43208 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0311 20:58:46.382818   43208 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0311 20:58:46.382830   43208 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0311 20:58:46.382835   43208 command_runner.go:130] > #
	I0311 20:58:46.382840   43208 command_runner.go:130] > # Using the seccomp notifier feature:
	I0311 20:58:46.382846   43208 command_runner.go:130] > #
	I0311 20:58:46.382851   43208 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0311 20:58:46.382859   43208 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0311 20:58:46.382862   43208 command_runner.go:130] > #
	I0311 20:58:46.382870   43208 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0311 20:58:46.382876   43208 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0311 20:58:46.382882   43208 command_runner.go:130] > #
	I0311 20:58:46.382887   43208 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0311 20:58:46.382893   43208 command_runner.go:130] > # feature.
	I0311 20:58:46.382897   43208 command_runner.go:130] > #
	I0311 20:58:46.382905   43208 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0311 20:58:46.382911   43208 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0311 20:58:46.382919   43208 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0311 20:58:46.382927   43208 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0311 20:58:46.382934   43208 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0311 20:58:46.382939   43208 command_runner.go:130] > #
	I0311 20:58:46.382944   43208 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0311 20:58:46.382952   43208 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0311 20:58:46.382955   43208 command_runner.go:130] > #
	I0311 20:58:46.382963   43208 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0311 20:58:46.382968   43208 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0311 20:58:46.382974   43208 command_runner.go:130] > #
	I0311 20:58:46.382980   43208 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0311 20:58:46.382988   43208 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0311 20:58:46.382994   43208 command_runner.go:130] > # limitation.
	I0311 20:58:46.382998   43208 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0311 20:58:46.383005   43208 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0311 20:58:46.383009   43208 command_runner.go:130] > runtime_type = "oci"
	I0311 20:58:46.383015   43208 command_runner.go:130] > runtime_root = "/run/runc"
	I0311 20:58:46.383020   43208 command_runner.go:130] > runtime_config_path = ""
	I0311 20:58:46.383027   43208 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0311 20:58:46.383030   43208 command_runner.go:130] > monitor_cgroup = "pod"
	I0311 20:58:46.383037   43208 command_runner.go:130] > monitor_exec_cgroup = ""
	I0311 20:58:46.383044   43208 command_runner.go:130] > monitor_env = [
	I0311 20:58:46.383052   43208 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0311 20:58:46.383058   43208 command_runner.go:130] > ]
	I0311 20:58:46.383062   43208 command_runner.go:130] > privileged_without_host_devices = false
	I0311 20:58:46.383071   43208 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0311 20:58:46.383078   43208 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0311 20:58:46.383084   43208 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0311 20:58:46.383093   43208 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0311 20:58:46.383103   43208 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0311 20:58:46.383111   43208 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0311 20:58:46.383122   43208 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0311 20:58:46.383132   43208 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0311 20:58:46.383139   43208 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0311 20:58:46.383149   43208 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0311 20:58:46.383153   43208 command_runner.go:130] > # Example:
	I0311 20:58:46.383161   43208 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0311 20:58:46.383166   43208 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0311 20:58:46.383173   43208 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0311 20:58:46.383178   43208 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0311 20:58:46.383181   43208 command_runner.go:130] > # cpuset = 0
	I0311 20:58:46.383185   43208 command_runner.go:130] > # cpushares = "0-1"
	I0311 20:58:46.383188   43208 command_runner.go:130] > # Where:
	I0311 20:58:46.383192   43208 command_runner.go:130] > # The workload name is workload-type.
	I0311 20:58:46.383198   43208 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0311 20:58:46.383202   43208 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0311 20:58:46.383207   43208 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0311 20:58:46.383214   43208 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0311 20:58:46.383219   43208 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0311 20:58:46.383223   43208 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0311 20:58:46.383229   43208 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0311 20:58:46.383233   43208 command_runner.go:130] > # Default value is set to true
	I0311 20:58:46.383237   43208 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0311 20:58:46.383242   43208 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0311 20:58:46.383246   43208 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0311 20:58:46.383250   43208 command_runner.go:130] > # Default value is set to 'false'
	I0311 20:58:46.383254   43208 command_runner.go:130] > # disable_hostport_mapping = false
	I0311 20:58:46.383264   43208 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0311 20:58:46.383267   43208 command_runner.go:130] > #
	I0311 20:58:46.383272   43208 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0311 20:58:46.383278   43208 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0311 20:58:46.383283   43208 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0311 20:58:46.383289   43208 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0311 20:58:46.383293   43208 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0311 20:58:46.383297   43208 command_runner.go:130] > [crio.image]
	I0311 20:58:46.383302   43208 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0311 20:58:46.383306   43208 command_runner.go:130] > # default_transport = "docker://"
	I0311 20:58:46.383311   43208 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0311 20:58:46.383317   43208 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0311 20:58:46.383321   43208 command_runner.go:130] > # global_auth_file = ""
	I0311 20:58:46.383325   43208 command_runner.go:130] > # The image used to instantiate infra containers.
	I0311 20:58:46.383330   43208 command_runner.go:130] > # This option supports live configuration reload.
	I0311 20:58:46.383334   43208 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0311 20:58:46.383340   43208 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0311 20:58:46.383345   43208 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0311 20:58:46.383353   43208 command_runner.go:130] > # This option supports live configuration reload.
	I0311 20:58:46.383357   43208 command_runner.go:130] > # pause_image_auth_file = ""
	I0311 20:58:46.383365   43208 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0311 20:58:46.383373   43208 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0311 20:58:46.383379   43208 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0311 20:58:46.383386   43208 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0311 20:58:46.383390   43208 command_runner.go:130] > # pause_command = "/pause"
	I0311 20:58:46.383397   43208 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0311 20:58:46.383406   43208 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0311 20:58:46.383412   43208 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0311 20:58:46.383420   43208 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0311 20:58:46.383427   43208 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0311 20:58:46.383436   43208 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0311 20:58:46.383441   43208 command_runner.go:130] > # pinned_images = [
	I0311 20:58:46.383445   43208 command_runner.go:130] > # ]
	I0311 20:58:46.383453   43208 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0311 20:58:46.383461   43208 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0311 20:58:46.383467   43208 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0311 20:58:46.383479   43208 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0311 20:58:46.383487   43208 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0311 20:58:46.383493   43208 command_runner.go:130] > # signature_policy = ""
	I0311 20:58:46.383499   43208 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0311 20:58:46.383508   43208 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0311 20:58:46.383516   43208 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0311 20:58:46.383524   43208 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0311 20:58:46.383530   43208 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0311 20:58:46.383538   43208 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0311 20:58:46.383543   43208 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0311 20:58:46.383551   43208 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0311 20:58:46.383558   43208 command_runner.go:130] > # changing them here.
	I0311 20:58:46.383564   43208 command_runner.go:130] > # insecure_registries = [
	I0311 20:58:46.383568   43208 command_runner.go:130] > # ]
	I0311 20:58:46.383576   43208 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0311 20:58:46.383584   43208 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0311 20:58:46.383588   43208 command_runner.go:130] > # image_volumes = "mkdir"
	I0311 20:58:46.383595   43208 command_runner.go:130] > # Temporary directory to use for storing big files
	I0311 20:58:46.383599   43208 command_runner.go:130] > # big_files_temporary_dir = ""
	I0311 20:58:46.383607   43208 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0311 20:58:46.383611   43208 command_runner.go:130] > # CNI plugins.
	I0311 20:58:46.383615   43208 command_runner.go:130] > [crio.network]
	I0311 20:58:46.383623   43208 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0311 20:58:46.383628   43208 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0311 20:58:46.383635   43208 command_runner.go:130] > # cni_default_network = ""
	I0311 20:58:46.383640   43208 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0311 20:58:46.383646   43208 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0311 20:58:46.383652   43208 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0311 20:58:46.383658   43208 command_runner.go:130] > # plugin_dirs = [
	I0311 20:58:46.383662   43208 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0311 20:58:46.383667   43208 command_runner.go:130] > # ]
	I0311 20:58:46.383673   43208 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0311 20:58:46.383679   43208 command_runner.go:130] > [crio.metrics]
	I0311 20:58:46.383684   43208 command_runner.go:130] > # Globally enable or disable metrics support.
	I0311 20:58:46.383692   43208 command_runner.go:130] > enable_metrics = true
	I0311 20:58:46.383696   43208 command_runner.go:130] > # Specify enabled metrics collectors.
	I0311 20:58:46.383710   43208 command_runner.go:130] > # Per default all metrics are enabled.
	I0311 20:58:46.383718   43208 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0311 20:58:46.383726   43208 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0311 20:58:46.383734   43208 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0311 20:58:46.383738   43208 command_runner.go:130] > # metrics_collectors = [
	I0311 20:58:46.383744   43208 command_runner.go:130] > # 	"operations",
	I0311 20:58:46.383748   43208 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0311 20:58:46.383755   43208 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0311 20:58:46.383759   43208 command_runner.go:130] > # 	"operations_errors",
	I0311 20:58:46.383766   43208 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0311 20:58:46.383770   43208 command_runner.go:130] > # 	"image_pulls_by_name",
	I0311 20:58:46.383777   43208 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0311 20:58:46.383781   43208 command_runner.go:130] > # 	"image_pulls_failures",
	I0311 20:58:46.383785   43208 command_runner.go:130] > # 	"image_pulls_successes",
	I0311 20:58:46.383790   43208 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0311 20:58:46.383794   43208 command_runner.go:130] > # 	"image_layer_reuse",
	I0311 20:58:46.383800   43208 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0311 20:58:46.383804   43208 command_runner.go:130] > # 	"containers_oom_total",
	I0311 20:58:46.383810   43208 command_runner.go:130] > # 	"containers_oom",
	I0311 20:58:46.383815   43208 command_runner.go:130] > # 	"processes_defunct",
	I0311 20:58:46.383834   43208 command_runner.go:130] > # 	"operations_total",
	I0311 20:58:46.383838   43208 command_runner.go:130] > # 	"operations_latency_seconds",
	I0311 20:58:46.383845   43208 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0311 20:58:46.383849   43208 command_runner.go:130] > # 	"operations_errors_total",
	I0311 20:58:46.383856   43208 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0311 20:58:46.383861   43208 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0311 20:58:46.383867   43208 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0311 20:58:46.383871   43208 command_runner.go:130] > # 	"image_pulls_success_total",
	I0311 20:58:46.383877   43208 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0311 20:58:46.383881   43208 command_runner.go:130] > # 	"containers_oom_count_total",
	I0311 20:58:46.383886   43208 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0311 20:58:46.383892   43208 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0311 20:58:46.383896   43208 command_runner.go:130] > # ]
	I0311 20:58:46.383904   43208 command_runner.go:130] > # The port on which the metrics server will listen.
	I0311 20:58:46.383908   43208 command_runner.go:130] > # metrics_port = 9090
	I0311 20:58:46.383915   43208 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0311 20:58:46.383924   43208 command_runner.go:130] > # metrics_socket = ""
	I0311 20:58:46.383931   43208 command_runner.go:130] > # The certificate for the secure metrics server.
	I0311 20:58:46.383937   43208 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0311 20:58:46.383947   43208 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0311 20:58:46.383954   43208 command_runner.go:130] > # certificate on any modification event.
	I0311 20:58:46.383957   43208 command_runner.go:130] > # metrics_cert = ""
	I0311 20:58:46.383965   43208 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0311 20:58:46.383969   43208 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0311 20:58:46.383975   43208 command_runner.go:130] > # metrics_key = ""
	I0311 20:58:46.383981   43208 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0311 20:58:46.383987   43208 command_runner.go:130] > [crio.tracing]
	I0311 20:58:46.383992   43208 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0311 20:58:46.383999   43208 command_runner.go:130] > # enable_tracing = false
	I0311 20:58:46.384008   43208 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0311 20:58:46.384015   43208 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0311 20:58:46.384021   43208 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0311 20:58:46.384029   43208 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0311 20:58:46.384033   43208 command_runner.go:130] > # CRI-O NRI configuration.
	I0311 20:58:46.384038   43208 command_runner.go:130] > [crio.nri]
	I0311 20:58:46.384043   43208 command_runner.go:130] > # Globally enable or disable NRI.
	I0311 20:58:46.384048   43208 command_runner.go:130] > # enable_nri = false
	I0311 20:58:46.384052   43208 command_runner.go:130] > # NRI socket to listen on.
	I0311 20:58:46.384056   43208 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0311 20:58:46.384063   43208 command_runner.go:130] > # NRI plugin directory to use.
	I0311 20:58:46.384068   43208 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0311 20:58:46.384075   43208 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0311 20:58:46.384079   43208 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0311 20:58:46.384087   43208 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0311 20:58:46.384091   43208 command_runner.go:130] > # nri_disable_connections = false
	I0311 20:58:46.384098   43208 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0311 20:58:46.384103   43208 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0311 20:58:46.384114   43208 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0311 20:58:46.384121   43208 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0311 20:58:46.384127   43208 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0311 20:58:46.384134   43208 command_runner.go:130] > [crio.stats]
	I0311 20:58:46.384139   43208 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0311 20:58:46.384151   43208 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0311 20:58:46.384158   43208 command_runner.go:130] > # stats_collection_period = 0
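The dumped CRI-O drop-in ends here. As a minimal sketch of reading such a file back, assuming the third-party github.com/BurntSushi/toml package and a placeholder path (the log does not say where minikube wrote this drop-in), the following Go snippet decodes a few of the [crio.runtime] and [crio.metrics] values echoed above:

	// readback_crio_conf.go: parse a CRI-O style TOML file and print a few of
	// the [crio.runtime] / [crio.metrics] fields shown in the log above.
	// The toml package and the file path are assumptions, not taken from the log.
	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	type crioConfig struct {
		Crio struct {
			Runtime struct {
				DropInfraCtr bool   `toml:"drop_infra_ctr"`
				PinnsPath    string `toml:"pinns_path"`
			} `toml:"runtime"`
			Metrics struct {
				EnableMetrics bool `toml:"enable_metrics"`
			} `toml:"metrics"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		// Placeholder location; substitute wherever the drop-in actually lives.
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println("drop_infra_ctr:", cfg.Crio.Runtime.DropInfraCtr)
		fmt.Println("pinns_path:", cfg.Crio.Runtime.PinnsPath)
		fmt.Println("enable_metrics:", cfg.Crio.Metrics.EnableMetrics)
	}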
	I0311 20:58:46.384306   43208 cni.go:84] Creating CNI manager for ""
	I0311 20:58:46.384319   43208 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0311 20:58:46.384328   43208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 20:58:46.384345   43208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-232100 NodeName:multinode-232100 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 20:58:46.384460   43208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-232100"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
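	The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a minimal sketch, assuming the sigs.k8s.io/yaml package and a local copy of the file (the log later copies it to /var/tmp/minikube/kubeadm.yaml.new on the node), this Go snippet pulls the ClusterConfiguration document back out and prints the endpoint and subnets:

	// check_kubeadm_yaml.go: split the multi-document kubeadm YAML and read
	// back a few ClusterConfiguration fields. Package choice and the local
	// file name "kubeadm.yaml" are assumptions for illustration.
	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"sigs.k8s.io/yaml"
	)

	type clusterConfig struct {
		Kind                 string `json:"kind"`
		ControlPlaneEndpoint string `json:"controlPlaneEndpoint"`
		KubernetesVersion    string `json:"kubernetesVersion"`
		Networking           struct {
			PodSubnet     string `json:"podSubnet"`
			ServiceSubnet string `json:"serviceSubnet"`
		} `json:"networking"`
	}

	func main() {
		data, err := os.ReadFile("kubeadm.yaml") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var cc clusterConfig
			if err := yaml.Unmarshal([]byte(doc), &cc); err != nil || cc.Kind != "ClusterConfiguration" {
				continue // skip the other documents in the stream
			}
			fmt.Println("endpoint:", cc.ControlPlaneEndpoint)
			fmt.Println("podSubnet:", cc.Networking.PodSubnet)
			fmt.Println("serviceSubnet:", cc.Networking.ServiceSubnet)
		}
	}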
	
	I0311 20:58:46.384519   43208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 20:58:46.395037   43208 command_runner.go:130] > kubeadm
	I0311 20:58:46.395055   43208 command_runner.go:130] > kubectl
	I0311 20:58:46.395060   43208 command_runner.go:130] > kubelet
	I0311 20:58:46.395352   43208 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 20:58:46.395403   43208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 20:58:46.405233   43208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0311 20:58:46.423263   43208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 20:58:46.441969   43208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0311 20:58:46.459603   43208 ssh_runner.go:195] Run: grep 192.168.39.134	control-plane.minikube.internal$ /etc/hosts
	I0311 20:58:46.463430   43208 command_runner.go:130] > 192.168.39.134	control-plane.minikube.internal
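The grep above verifies that the node's /etc/hosts pins control-plane.minikube.internal to the advertise address. A minimal Go sketch of the same check, using only the standard library and the IP taken from the log:

	// hosts_check.go: scan /etc/hosts for the control-plane entry the grep
	// above looks for. Only the standard library is used.
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		const wantIP, wantHost = "192.168.39.134", "control-plane.minikube.internal"
		f, err := os.Open("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			// A hosts line is "IP name [aliases...]"; comments and blanks have fewer fields.
			fields := strings.Fields(scanner.Text())
			if len(fields) >= 2 && fields[0] == wantIP {
				for _, name := range fields[1:] {
					if name == wantHost {
						fmt.Println("entry present:", scanner.Text())
						return
					}
				}
			}
		}
		fmt.Println("entry missing")
	}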
	I0311 20:58:46.463596   43208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 20:58:46.606718   43208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 20:58:46.623414   43208 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100 for IP: 192.168.39.134
	I0311 20:58:46.623435   43208 certs.go:194] generating shared ca certs ...
	I0311 20:58:46.623454   43208 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 20:58:46.623599   43208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 20:58:46.623673   43208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 20:58:46.623688   43208 certs.go:256] generating profile certs ...
	I0311 20:58:46.623855   43208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/client.key
	I0311 20:58:46.623987   43208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/apiserver.key.81468c01
	I0311 20:58:46.624089   43208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/proxy-client.key
	I0311 20:58:46.624107   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0311 20:58:46.624128   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0311 20:58:46.624148   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0311 20:58:46.624173   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0311 20:58:46.624203   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0311 20:58:46.624226   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0311 20:58:46.624256   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0311 20:58:46.624309   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0311 20:58:46.624383   43208 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 20:58:46.624432   43208 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 20:58:46.624447   43208 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 20:58:46.624482   43208 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 20:58:46.624523   43208 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 20:58:46.624558   43208 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 20:58:46.624624   43208 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 20:58:46.624667   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> /usr/share/ca-certificates/182352.pem
	I0311 20:58:46.624693   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:58:46.624725   43208 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem -> /usr/share/ca-certificates/18235.pem
	I0311 20:58:46.625393   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 20:58:46.653405   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 20:58:46.681479   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 20:58:46.710610   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 20:58:46.739152   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0311 20:58:46.767216   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 20:58:46.795692   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 20:58:46.822161   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/multinode-232100/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 20:58:46.848177   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 20:58:46.873810   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 20:58:46.924491   43208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 20:58:46.952844   43208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 20:58:46.971673   43208 ssh_runner.go:195] Run: openssl version
	I0311 20:58:46.977932   43208 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0311 20:58:46.978218   43208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 20:58:46.989932   43208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 20:58:46.994637   43208 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 20:58:46.994665   43208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 20:58:46.994699   43208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 20:58:47.000385   43208 command_runner.go:130] > 3ec20f2e
	I0311 20:58:47.000606   43208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 20:58:47.010285   43208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 20:58:47.024106   43208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:58:47.028811   43208 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:58:47.028832   43208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:58:47.028865   43208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 20:58:47.034877   43208 command_runner.go:130] > b5213941
	I0311 20:58:47.034942   43208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 20:58:47.045164   43208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 20:58:47.056701   43208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 20:58:47.061461   43208 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 20:58:47.061483   43208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 20:58:47.061508   43208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 20:58:47.067421   43208 command_runner.go:130] > 51391683
	I0311 20:58:47.067468   43208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
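The test -L / ln -fs commands above install each CA under /etc/ssl/certs/<subject-hash>.0, where the hash comes from openssl x509 -hash. A minimal Go sketch of that step, shelling out to openssl the same way the log does; paths mirror the minikubeCA.pem case, and running it for real would need root:

	// ca_hash_link.go: compute a certificate's OpenSSL subject hash and link
	// /etc/ssl/certs/<hash>.0 to the installed copy, mirroring the shell
	// commands in the log above.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		srcPath := "/usr/share/ca-certificates/minikubeCA.pem" // hashed in the log
		installedPath := "/etc/ssl/certs/minikubeCA.pem"       // link target in the log
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", srcPath).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Replace any stale link, mirroring "ln -fs".
		_ = os.Remove(link)
		if err := os.Symlink(installedPath, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println(link, "->", installedPath)
	}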
	I0311 20:58:47.077210   43208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 20:58:47.082673   43208 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 20:58:47.082692   43208 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0311 20:58:47.082701   43208 command_runner.go:130] > Device: 253,1	Inode: 3150397     Links: 1
	I0311 20:58:47.082712   43208 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0311 20:58:47.082726   43208 command_runner.go:130] > Access: 2024-03-11 20:52:36.245703572 +0000
	I0311 20:58:47.082737   43208 command_runner.go:130] > Modify: 2024-03-11 20:52:36.245703572 +0000
	I0311 20:58:47.082749   43208 command_runner.go:130] > Change: 2024-03-11 20:52:36.245703572 +0000
	I0311 20:58:47.082758   43208 command_runner.go:130] >  Birth: 2024-03-11 20:52:36.245703572 +0000
	I0311 20:58:47.082816   43208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 20:58:47.088799   43208 command_runner.go:130] > Certificate will not expire
	I0311 20:58:47.088853   43208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 20:58:47.097086   43208 command_runner.go:130] > Certificate will not expire
	I0311 20:58:47.097148   43208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 20:58:47.102916   43208 command_runner.go:130] > Certificate will not expire
	I0311 20:58:47.102968   43208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 20:58:47.108663   43208 command_runner.go:130] > Certificate will not expire
	I0311 20:58:47.108726   43208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 20:58:47.114443   43208 command_runner.go:130] > Certificate will not expire
	I0311 20:58:47.114500   43208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 20:58:47.120349   43208 command_runner.go:130] > Certificate will not expire
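Each openssl x509 -checkend 86400 call above asks whether the certificate is still valid 24 hours from now. A minimal sketch of the same check with Go's standard library, using one of the paths from the log:

	// cert_checkend.go: the "-checkend 86400" check in Go. A certificate
	// "will not expire" if its NotAfter is still more than 24h away.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}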
	I0311 20:58:47.120491   43208 kubeadm.go:391] StartCluster: {Name:multinode-232100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-232100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.76 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:58:47.120593   43208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 20:58:47.120623   43208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 20:58:47.162222   43208 command_runner.go:130] > 60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4
	I0311 20:58:47.162280   43208 command_runner.go:130] > 93cc20c6fde7bedf1be2d404cdc194289ef689a7f4ac4739cf69d19a53dc3eb4
	I0311 20:58:47.162501   43208 command_runner.go:130] > f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67
	I0311 20:58:47.162579   43208 command_runner.go:130] > 54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce
	I0311 20:58:47.162715   43208 command_runner.go:130] > d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae
	I0311 20:58:47.162758   43208 command_runner.go:130] > 1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9
	I0311 20:58:47.162883   43208 command_runner.go:130] > d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73
	I0311 20:58:47.163094   43208 command_runner.go:130] > bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e
	I0311 20:58:47.164514   43208 cri.go:89] found id: "60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4"
	I0311 20:58:47.164532   43208 cri.go:89] found id: "93cc20c6fde7bedf1be2d404cdc194289ef689a7f4ac4739cf69d19a53dc3eb4"
	I0311 20:58:47.164537   43208 cri.go:89] found id: "f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67"
	I0311 20:58:47.164541   43208 cri.go:89] found id: "54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce"
	I0311 20:58:47.164545   43208 cri.go:89] found id: "d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae"
	I0311 20:58:47.164549   43208 cri.go:89] found id: "1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9"
	I0311 20:58:47.164553   43208 cri.go:89] found id: "d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73"
	I0311 20:58:47.164557   43208 cri.go:89] found id: "bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e"
	I0311 20:58:47.164561   43208 cri.go:89] found id: ""
	I0311 20:58:47.164605   43208 ssh_runner.go:195] Run: sudo runc list -f json
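runc list -f json prints the low-level container list as a JSON array. A minimal Go sketch of consuming that output; the field names (id, status, bundle) are assumptions about runc's JSON schema, not something this log confirms:

	// runc_list.go: run "sudo runc list -f json" (the command above) and print
	// a short summary per container. Field names are assumed, not verified here.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
		Bundle string `json:"bundle"`
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		for _, c := range containers {
			fmt.Printf("%s\t%s\t%s\n", c.ID, c.Status, c.Bundle)
		}
	}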
	
	
	==> CRI-O <==
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.024669960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710190962024651743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fd40724-34d2-4c07-848e-0da64e04cd34 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.025281173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48171992-cc83-4822-b744-ac2421a2f620 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.025333001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48171992-cc83-4822-b744-ac2421a2f620 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.025651152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c36d4a28cda0a23dce7dbdcbf0163612922d562817099d9f33ba4b885c952e2,PodSandboxId:d893dd416e46f624b92d9c86301cf2889aeeb8671c5b54362cad8b45aa03a3ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710190767903938656,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e799f82d3b4a2fd977229b1f254d0562771524af131ef247cb56cc2835380,PodSandboxId:24b9cb8b3e7691ba85d86bf40ceb239d72cd8c4cfd499997d5d279c6c752c475,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710190734396907526,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5635f2ddd04d78a6a5f5071d5db68a4c834509272fe0ccd30841272f215982dd,PodSandboxId:7f4eb2a24247567189b58dd33d0e131343e06bbe6c1f4cc60da6afa79cc3b962,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710190734377851724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4e24a44a9d45bd39961e44a7307731ca971e7fdca4afd3c61cd8345f63be0b,PodSandboxId:035f90aaa5af743f4a5b7d86b49afd753bb5bbcb04948ae5c29fd4560cc5f4cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710190734268093040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},A
nnotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9ab4b51ae261322c62338c6b69c1425d5c5e5616be3454f9a8389b28e80f01,PodSandboxId:9df6e16d3753b9c8d229af005789555fd9232a7124a8ec7fb8bf0dbdb4846704,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710190734233725718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da33624f7e932928d864da657e73ab7a1c23148c2b6f4efa9af40a45842f644f,PodSandboxId:3e2f4ddb961f49ff4b984f5a2d9d6d408448b1612be896ba3a8e19ef3d2aa779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710190729352546592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f1035ad4acda3cd4b709aaf0e0672c8f9cffb9b722dc8b3a7695164245dc61,PodSandboxId:96bc33e30251ec5824611091301165a9dfada84b03a0faa3ff58bd9b546a6331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710190729315390243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93897777952ec8ae9811c2a98cb03afd1a676c3227f8089f4ac3077bf0d19f62,PodSandboxId:99ea1b5a303ae5f127d4c80ca9967c4b3b09a8def10a15d805a82bb49faf1bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710190729257981147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a946faba1cc5368b7c09a7140ae7389a7382b0775ac4652445421a7b855a504,PodSandboxId:47925952096dea5fbc001d3041625e0aa99ae060a78ee8dfa8edd6c9dc95737c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710190729214367570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29dc13af7eab8845490b1e01d86973909a1244b41f9360951f5eea7f2bfa7ab,PodSandboxId:7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710190425647848479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cc20c6fde7bedf1be2d404cdc194289ef689a7f4ac4739cf69d19a53dc3eb4,PodSandboxId:e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710190383291987417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},Annotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4,PodSandboxId:62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710190383295096641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67,PodSandboxId:71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710190381547505184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce,PodSandboxId:f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710190378853642969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.kubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae,PodSandboxId:3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710190360438158106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9,PodSandboxId:e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710190360380490505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e,PodSandboxId:1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710190360326914328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661
d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73,PodSandboxId:7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710190360330675507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations
:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48171992-cc83-4822-b744-ac2421a2f620 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.069810000Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30af7609-9d71-459d-8f48-e9ac92505c26 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.069876352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30af7609-9d71-459d-8f48-e9ac92505c26 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.071329957Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ee4decd-7481-4f3d-9351-39c508d721d5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.071730704Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710190962071710285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ee4decd-7481-4f3d-9351-39c508d721d5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.072714505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ffd47b1d-2f0c-4d51-91d2-dfc818a55a79 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.072798146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ffd47b1d-2f0c-4d51-91d2-dfc818a55a79 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.073542631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c36d4a28cda0a23dce7dbdcbf0163612922d562817099d9f33ba4b885c952e2,PodSandboxId:d893dd416e46f624b92d9c86301cf2889aeeb8671c5b54362cad8b45aa03a3ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710190767903938656,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e799f82d3b4a2fd977229b1f254d0562771524af131ef247cb56cc2835380,PodSandboxId:24b9cb8b3e7691ba85d86bf40ceb239d72cd8c4cfd499997d5d279c6c752c475,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710190734396907526,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5635f2ddd04d78a6a5f5071d5db68a4c834509272fe0ccd30841272f215982dd,PodSandboxId:7f4eb2a24247567189b58dd33d0e131343e06bbe6c1f4cc60da6afa79cc3b962,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710190734377851724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4e24a44a9d45bd39961e44a7307731ca971e7fdca4afd3c61cd8345f63be0b,PodSandboxId:035f90aaa5af743f4a5b7d86b49afd753bb5bbcb04948ae5c29fd4560cc5f4cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710190734268093040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},A
nnotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9ab4b51ae261322c62338c6b69c1425d5c5e5616be3454f9a8389b28e80f01,PodSandboxId:9df6e16d3753b9c8d229af005789555fd9232a7124a8ec7fb8bf0dbdb4846704,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710190734233725718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da33624f7e932928d864da657e73ab7a1c23148c2b6f4efa9af40a45842f644f,PodSandboxId:3e2f4ddb961f49ff4b984f5a2d9d6d408448b1612be896ba3a8e19ef3d2aa779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710190729352546592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f1035ad4acda3cd4b709aaf0e0672c8f9cffb9b722dc8b3a7695164245dc61,PodSandboxId:96bc33e30251ec5824611091301165a9dfada84b03a0faa3ff58bd9b546a6331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710190729315390243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93897777952ec8ae9811c2a98cb03afd1a676c3227f8089f4ac3077bf0d19f62,PodSandboxId:99ea1b5a303ae5f127d4c80ca9967c4b3b09a8def10a15d805a82bb49faf1bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710190729257981147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a946faba1cc5368b7c09a7140ae7389a7382b0775ac4652445421a7b855a504,PodSandboxId:47925952096dea5fbc001d3041625e0aa99ae060a78ee8dfa8edd6c9dc95737c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710190729214367570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29dc13af7eab8845490b1e01d86973909a1244b41f9360951f5eea7f2bfa7ab,PodSandboxId:7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710190425647848479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cc20c6fde7bedf1be2d404cdc194289ef689a7f4ac4739cf69d19a53dc3eb4,PodSandboxId:e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710190383291987417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},Annotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4,PodSandboxId:62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710190383295096641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67,PodSandboxId:71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710190381547505184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce,PodSandboxId:f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710190378853642969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.kubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae,PodSandboxId:3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710190360438158106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9,PodSandboxId:e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710190360380490505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e,PodSandboxId:1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710190360326914328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661
d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73,PodSandboxId:7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710190360330675507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations
:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ffd47b1d-2f0c-4d51-91d2-dfc818a55a79 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.116679913Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d49d95e-9893-4050-afd0-47358c34de87 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.116747207Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d49d95e-9893-4050-afd0-47358c34de87 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.118104137Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a58c1f6-3485-423b-a47e-5610a8f2cc1b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.118716630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710190962118694158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a58c1f6-3485-423b-a47e-5610a8f2cc1b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.119494634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ace1501-05a4-4c66-8b7e-769c3ede88d0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.119685123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ace1501-05a4-4c66-8b7e-769c3ede88d0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.120159411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c36d4a28cda0a23dce7dbdcbf0163612922d562817099d9f33ba4b885c952e2,PodSandboxId:d893dd416e46f624b92d9c86301cf2889aeeb8671c5b54362cad8b45aa03a3ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710190767903938656,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e799f82d3b4a2fd977229b1f254d0562771524af131ef247cb56cc2835380,PodSandboxId:24b9cb8b3e7691ba85d86bf40ceb239d72cd8c4cfd499997d5d279c6c752c475,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710190734396907526,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5635f2ddd04d78a6a5f5071d5db68a4c834509272fe0ccd30841272f215982dd,PodSandboxId:7f4eb2a24247567189b58dd33d0e131343e06bbe6c1f4cc60da6afa79cc3b962,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710190734377851724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4e24a44a9d45bd39961e44a7307731ca971e7fdca4afd3c61cd8345f63be0b,PodSandboxId:035f90aaa5af743f4a5b7d86b49afd753bb5bbcb04948ae5c29fd4560cc5f4cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710190734268093040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},A
nnotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9ab4b51ae261322c62338c6b69c1425d5c5e5616be3454f9a8389b28e80f01,PodSandboxId:9df6e16d3753b9c8d229af005789555fd9232a7124a8ec7fb8bf0dbdb4846704,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710190734233725718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da33624f7e932928d864da657e73ab7a1c23148c2b6f4efa9af40a45842f644f,PodSandboxId:3e2f4ddb961f49ff4b984f5a2d9d6d408448b1612be896ba3a8e19ef3d2aa779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710190729352546592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f1035ad4acda3cd4b709aaf0e0672c8f9cffb9b722dc8b3a7695164245dc61,PodSandboxId:96bc33e30251ec5824611091301165a9dfada84b03a0faa3ff58bd9b546a6331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710190729315390243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93897777952ec8ae9811c2a98cb03afd1a676c3227f8089f4ac3077bf0d19f62,PodSandboxId:99ea1b5a303ae5f127d4c80ca9967c4b3b09a8def10a15d805a82bb49faf1bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710190729257981147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a946faba1cc5368b7c09a7140ae7389a7382b0775ac4652445421a7b855a504,PodSandboxId:47925952096dea5fbc001d3041625e0aa99ae060a78ee8dfa8edd6c9dc95737c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710190729214367570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29dc13af7eab8845490b1e01d86973909a1244b41f9360951f5eea7f2bfa7ab,PodSandboxId:7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710190425647848479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cc20c6fde7bedf1be2d404cdc194289ef689a7f4ac4739cf69d19a53dc3eb4,PodSandboxId:e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710190383291987417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},Annotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4,PodSandboxId:62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710190383295096641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67,PodSandboxId:71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710190381547505184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce,PodSandboxId:f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710190378853642969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.kubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae,PodSandboxId:3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710190360438158106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9,PodSandboxId:e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710190360380490505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e,PodSandboxId:1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710190360326914328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661
d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73,PodSandboxId:7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710190360330675507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations
:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ace1501-05a4-4c66-8b7e-769c3ede88d0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.160554477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34735dfd-3aa7-4384-ac7f-d741dc1e4936 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.160620421Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34735dfd-3aa7-4384-ac7f-d741dc1e4936 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.161730010Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=632d9dad-25c0-46c1-bc80-b9b85178ad17 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.162276558Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710190962162254304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=632d9dad-25c0-46c1-bc80-b9b85178ad17 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.162807155Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46224d1a-5bf7-41ff-a9fa-74c712bf2372 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.162860425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46224d1a-5bf7-41ff-a9fa-74c712bf2372 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:02:42 multinode-232100 crio[2889]: time="2024-03-11 21:02:42.163261918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6c36d4a28cda0a23dce7dbdcbf0163612922d562817099d9f33ba4b885c952e2,PodSandboxId:d893dd416e46f624b92d9c86301cf2889aeeb8671c5b54362cad8b45aa03a3ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710190767903938656,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e799f82d3b4a2fd977229b1f254d0562771524af131ef247cb56cc2835380,PodSandboxId:24b9cb8b3e7691ba85d86bf40ceb239d72cd8c4cfd499997d5d279c6c752c475,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710190734396907526,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5635f2ddd04d78a6a5f5071d5db68a4c834509272fe0ccd30841272f215982dd,PodSandboxId:7f4eb2a24247567189b58dd33d0e131343e06bbe6c1f4cc60da6afa79cc3b962,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710190734377851724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4e24a44a9d45bd39961e44a7307731ca971e7fdca4afd3c61cd8345f63be0b,PodSandboxId:035f90aaa5af743f4a5b7d86b49afd753bb5bbcb04948ae5c29fd4560cc5f4cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710190734268093040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},A
nnotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9ab4b51ae261322c62338c6b69c1425d5c5e5616be3454f9a8389b28e80f01,PodSandboxId:9df6e16d3753b9c8d229af005789555fd9232a7124a8ec7fb8bf0dbdb4846704,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710190734233725718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da33624f7e932928d864da657e73ab7a1c23148c2b6f4efa9af40a45842f644f,PodSandboxId:3e2f4ddb961f49ff4b984f5a2d9d6d408448b1612be896ba3a8e19ef3d2aa779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710190729352546592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f1035ad4acda3cd4b709aaf0e0672c8f9cffb9b722dc8b3a7695164245dc61,PodSandboxId:96bc33e30251ec5824611091301165a9dfada84b03a0faa3ff58bd9b546a6331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710190729315390243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93897777952ec8ae9811c2a98cb03afd1a676c3227f8089f4ac3077bf0d19f62,PodSandboxId:99ea1b5a303ae5f127d4c80ca9967c4b3b09a8def10a15d805a82bb49faf1bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710190729257981147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a946faba1cc5368b7c09a7140ae7389a7382b0775ac4652445421a7b855a504,PodSandboxId:47925952096dea5fbc001d3041625e0aa99ae060a78ee8dfa8edd6c9dc95737c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710190729214367570,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a29dc13af7eab8845490b1e01d86973909a1244b41f9360951f5eea7f2bfa7ab,PodSandboxId:7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710190425647848479,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4hsnz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e93127ae-9454-4660-9b50-359d12adcffe,},Annotations:map[string]string{io.kubernetes.container.hash: 3fae844f,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93cc20c6fde7bedf1be2d404cdc194289ef689a7f4ac4739cf69d19a53dc3eb4,PodSandboxId:e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710190383291987417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32d28c9d-7ec7-44b0-9dbd-039296a7a274,},Annotations:map[string]string{io.kubernetes.container.hash: d2d33846,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4,PodSandboxId:62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710190383295096641,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5mg4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2b9427c-06b4-4f56-bc4a-4adc16471a65,},Annotations:map[string]string{io.kubernetes.container.hash: 221a18a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67,PodSandboxId:71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710190381547505184,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glj55,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a818af00-dedc-4df2-98f0-0f657141080e,},Annotations:map[string]string{io.kubernetes.container.hash: b38e3de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce,PodSandboxId:f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710190378853642969,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zdkdk,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 71289465-761a-45e9-aeea-487886492715,},Annotations:map[string]string{io.kubernetes.container.hash: e7344bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae,PodSandboxId:3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710190360438158106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 0e6c74ae7825d32a30354efaeda334ed,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9,PodSandboxId:e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710190360380490505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e47e5bbe85a59f76ef5b1b2f838a8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e,PodSandboxId:1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710190360326914328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c755fbdb681fc0a3c29e9c4a4faa661
d,},Annotations:map[string]string{io.kubernetes.container.hash: e42a8f7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73,PodSandboxId:7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710190360330675507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-232100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d430d93ac79511930f8ee4e584b8a9,},Annotations
:map[string]string{io.kubernetes.container.hash: 7aede132,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46224d1a-5bf7-41ff-a9fa-74c712bf2372 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6c36d4a28cda0       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   d893dd416e46f       busybox-5b5d89c9d6-4hsnz
	397e799f82d3b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   24b9cb8b3e769       kindnet-glj55
	5635f2ddd04d7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   7f4eb2a242475       coredns-5dd5756b68-5mg4g
	5e4e24a44a9d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   035f90aaa5af7       storage-provisioner
	2a9ab4b51ae26       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   9df6e16d3753b       kube-proxy-zdkdk
	da33624f7e932       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   3e2f4ddb961f4       kube-scheduler-multinode-232100
	a2f1035ad4acd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   96bc33e30251e       etcd-multinode-232100
	93897777952ec       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   99ea1b5a303ae       kube-apiserver-multinode-232100
	9a946faba1cc5       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   47925952096de       kube-controller-manager-multinode-232100
	a29dc13af7eab       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   7983479821d10       busybox-5b5d89c9d6-4hsnz
	60c4a4e869509       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      9 minutes ago       Exited              coredns                   0                   62bf0ad89abce       coredns-5dd5756b68-5mg4g
	93cc20c6fde7b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   e7fd5611a7509       storage-provisioner
	f48ce4493a06c       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    9 minutes ago       Exited              kindnet-cni               0                   71e18232ae358       kindnet-glj55
	54c8e9ef07bcb       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      9 minutes ago       Exited              kube-proxy                0                   f3be5dce7a231       kube-proxy-zdkdk
	d9bb108f87baf       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      10 minutes ago      Exited              kube-scheduler            0                   3e7917fa7ecc6       kube-scheduler-multinode-232100
	1ad2090b379ff       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      10 minutes ago      Exited              kube-controller-manager   0                   e7db90ecbf027       kube-controller-manager-multinode-232100
	d399b5316450e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      10 minutes ago      Exited              kube-apiserver            0                   7e41c8b42456d       kube-apiserver-multinode-232100
	bc8d4f35d2f61       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      10 minutes ago      Exited              etcd                      0                   1ca9304474644       etcd-multinode-232100
	
	
	==> coredns [5635f2ddd04d78a6a5f5071d5db68a4c834509272fe0ccd30841272f215982dd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51308 - 64762 "HINFO IN 2907767183170153192.861951351699720548. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011178125s
	
	
	==> coredns [60c4a4e86950964dff2b1f5cfc521e797adb72f06f5ceb42969ceabe34a9a0e4] <==
	[INFO] 10.244.1.2:57034 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002033813s
	[INFO] 10.244.1.2:50664 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000197006s
	[INFO] 10.244.1.2:34648 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085765s
	[INFO] 10.244.1.2:46501 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001280525s
	[INFO] 10.244.1.2:51451 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159813s
	[INFO] 10.244.1.2:35952 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068758s
	[INFO] 10.244.1.2:51667 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126112s
	[INFO] 10.244.0.3:42139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112438s
	[INFO] 10.244.0.3:49729 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078946s
	[INFO] 10.244.0.3:50607 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075093s
	[INFO] 10.244.0.3:33038 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059279s
	[INFO] 10.244.1.2:33132 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152907s
	[INFO] 10.244.1.2:59285 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000322266s
	[INFO] 10.244.1.2:49834 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110922s
	[INFO] 10.244.1.2:45776 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086518s
	[INFO] 10.244.0.3:47399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102894s
	[INFO] 10.244.0.3:41422 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018856s
	[INFO] 10.244.0.3:40403 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118524s
	[INFO] 10.244.0.3:52549 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102736s
	[INFO] 10.244.1.2:39878 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188692s
	[INFO] 10.244.1.2:55958 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140596s
	[INFO] 10.244.1.2:39867 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098712s
	[INFO] 10.244.1.2:54626 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010438s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-232100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-232100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=multinode-232100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T20_52_47_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:52:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-232100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:02:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:58:52 +0000   Mon, 11 Mar 2024 20:52:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:58:52 +0000   Mon, 11 Mar 2024 20:52:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:58:52 +0000   Mon, 11 Mar 2024 20:52:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:58:52 +0000   Mon, 11 Mar 2024 20:53:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    multinode-232100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 02bc41b9a5d647028d026e2dfd08c841
	  System UUID:                02bc41b9-a5d6-4702-8d02-6e2dfd08c841
	  Boot ID:                    b4da8b15-bbef-4963-982e-9fb47ed83221
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-4hsnz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 coredns-5dd5756b68-5mg4g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m43s
	  kube-system                 etcd-multinode-232100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m56s
	  kube-system                 kindnet-glj55                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m44s
	  kube-system                 kube-apiserver-multinode-232100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 kube-controller-manager-multinode-232100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 kube-proxy-zdkdk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-scheduler-multinode-232100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m43s                  kube-proxy       
	  Normal  Starting                 3m47s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-232100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-232100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-232100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     9m56s                  kubelet          Node multinode-232100 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m56s                  kubelet          Node multinode-232100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m56s                  kubelet          Node multinode-232100 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m56s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m44s                  node-controller  Node multinode-232100 event: Registered Node multinode-232100 in Controller
	  Normal  NodeReady                9m40s                  kubelet          Node multinode-232100 status is now: NodeReady
	  Normal  Starting                 3m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m54s)  kubelet          Node multinode-232100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x8 over 3m54s)  kubelet          Node multinode-232100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x7 over 3m54s)  kubelet          Node multinode-232100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m37s                  node-controller  Node multinode-232100 event: Registered Node multinode-232100 in Controller
	
	
	Name:               multinode-232100-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-232100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=multinode-232100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_11T20_59_34_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:59:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-232100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:00:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 11 Mar 2024 21:00:04 +0000   Mon, 11 Mar 2024 21:00:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 11 Mar 2024 21:00:04 +0000   Mon, 11 Mar 2024 21:00:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 11 Mar 2024 21:00:04 +0000   Mon, 11 Mar 2024 21:00:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 11 Mar 2024 21:00:04 +0000   Mon, 11 Mar 2024 21:00:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    multinode-232100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 a13ce8e1068746bbbb0a72e87a2164be
	  System UUID:                a13ce8e1-0687-46bb-bb0a-72e87a2164be
	  Boot ID:                    22efa84c-7c90-4e9e-a8f9-b47ed9c33339
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-99hff    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 kindnet-bgbtm               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m8s
	  kube-system                 kube-proxy-lmrv2            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 9m4s                  kube-proxy       
	  Normal  Starting                 3m4s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m8s (x5 over 9m9s)   kubelet          Node multinode-232100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m8s (x5 over 9m9s)   kubelet          Node multinode-232100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m8s (x5 over 9m9s)   kubelet          Node multinode-232100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m1s                  kubelet          Node multinode-232100-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m8s (x5 over 3m10s)  kubelet          Node multinode-232100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x5 over 3m10s)  kubelet          Node multinode-232100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x5 over 3m10s)  kubelet          Node multinode-232100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m2s                  kubelet          Node multinode-232100-m02 status is now: NodeReady
	  Normal  NodeNotReady             107s                  node-controller  Node multinode-232100-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.058602] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064037] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.189005] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.122829] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.261309] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +5.076626] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.065128] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.502659] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.583203] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.716230] systemd-fstab-generator[1276]: Ignoring "noauto" option for root device
	[  +0.076090] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.327725] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.814428] systemd-fstab-generator[1656]: Ignoring "noauto" option for root device
	[Mar11 20:53] kauditd_printk_skb: 80 callbacks suppressed
	[Mar11 20:58] systemd-fstab-generator[2814]: Ignoring "noauto" option for root device
	[  +0.151843] systemd-fstab-generator[2826]: Ignoring "noauto" option for root device
	[  +0.171943] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.143458] systemd-fstab-generator[2852]: Ignoring "noauto" option for root device
	[  +0.270687] systemd-fstab-generator[2876]: Ignoring "noauto" option for root device
	[  +0.729723] systemd-fstab-generator[2974]: Ignoring "noauto" option for root device
	[  +1.798092] systemd-fstab-generator[3104]: Ignoring "noauto" option for root device
	[  +5.834757] kauditd_printk_skb: 184 callbacks suppressed
	[Mar11 20:59] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.401521] systemd-fstab-generator[3924]: Ignoring "noauto" option for root device
	[ +21.035573] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [a2f1035ad4acda3cd4b709aaf0e0672c8f9cffb9b722dc8b3a7695164245dc61] <==
	{"level":"info","ts":"2024-03-11T20:58:49.872112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c switched to configuration voters=(5947142644092330044)"}
	{"level":"info","ts":"2024-03-11T20:58:49.872697Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d3dad3a9a0ef02b3","local-member-id":"52887eb9b9b3603c","added-peer-id":"52887eb9b9b3603c","added-peer-peer-urls":["https://192.168.39.134:2380"]}
	{"level":"info","ts":"2024-03-11T20:58:49.876488Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d3dad3a9a0ef02b3","local-member-id":"52887eb9b9b3603c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T20:58:49.876712Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T20:58:49.90162Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-11T20:58:49.90189Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"52887eb9b9b3603c","initial-advertise-peer-urls":["https://192.168.39.134:2380"],"listen-peer-urls":["https://192.168.39.134:2380"],"advertise-client-urls":["https://192.168.39.134:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.134:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T20:58:49.901947Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T20:58:49.906082Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.134:2380"}
	{"level":"info","ts":"2024-03-11T20:58:49.906168Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.134:2380"}
	{"level":"info","ts":"2024-03-11T20:58:51.323548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-11T20:58:51.32362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-11T20:58:51.323636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c received MsgPreVoteResp from 52887eb9b9b3603c at term 2"}
	{"level":"info","ts":"2024-03-11T20:58:51.323661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c became candidate at term 3"}
	{"level":"info","ts":"2024-03-11T20:58:51.323671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c received MsgVoteResp from 52887eb9b9b3603c at term 3"}
	{"level":"info","ts":"2024-03-11T20:58:51.323679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52887eb9b9b3603c became leader at term 3"}
	{"level":"info","ts":"2024-03-11T20:58:51.323692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 52887eb9b9b3603c elected leader 52887eb9b9b3603c at term 3"}
	{"level":"info","ts":"2024-03-11T20:58:51.329709Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"52887eb9b9b3603c","local-member-attributes":"{Name:multinode-232100 ClientURLs:[https://192.168.39.134:2379]}","request-path":"/0/members/52887eb9b9b3603c/attributes","cluster-id":"d3dad3a9a0ef02b3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T20:58:51.329707Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T20:58:51.329742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T20:58:51.331389Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.134:2379"}
	{"level":"info","ts":"2024-03-11T20:58:51.331637Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T20:58:51.331926Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T20:58:51.331968Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T20:59:39.026632Z","caller":"traceutil/trace.go:171","msg":"trace[587982942] transaction","detail":"{read_only:false; response_revision:1018; number_of_response:1; }","duration":"210.342263ms","start":"2024-03-11T20:59:38.816261Z","end":"2024-03-11T20:59:39.026603Z","steps":["trace[587982942] 'process raft request'  (duration: 209.941596ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T21:00:13.09935Z","caller":"traceutil/trace.go:171","msg":"trace[905866632] transaction","detail":"{read_only:false; response_revision:1102; number_of_response:1; }","duration":"162.186625ms","start":"2024-03-11T21:00:12.937125Z","end":"2024-03-11T21:00:13.099311Z","steps":["trace[905866632] 'process raft request'  (duration: 161.341618ms)"],"step_count":1}
	
	
	==> etcd [bc8d4f35d2f6169e64c28a6f66e6d5d888897669007ee3c6050f8fabd407d50e] <==
	{"level":"info","ts":"2024-03-11T20:54:19.53218Z","caller":"traceutil/trace.go:171","msg":"trace[627626441] linearizableReadLoop","detail":"{readStateIndex:620; appliedIndex:619; }","duration":"197.165558ms","start":"2024-03-11T20:54:19.334981Z","end":"2024-03-11T20:54:19.532146Z","steps":["trace[627626441] 'read index received'  (duration: 81.605505ms)","trace[627626441] 'applied index is now lower than readState.Index'  (duration: 115.55887ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:54:19.53238Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.413663ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-11T20:54:19.532529Z","caller":"traceutil/trace.go:171","msg":"trace[507023335] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:591; }","duration":"197.489053ms","start":"2024-03-11T20:54:19.334951Z","end":"2024-03-11T20:54:19.53244Z","steps":["trace[507023335] 'agreement among raft nodes before linearized reading'  (duration: 197.266808ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:54:19.532742Z","caller":"traceutil/trace.go:171","msg":"trace[751447747] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"219.32659ms","start":"2024-03-11T20:54:19.313392Z","end":"2024-03-11T20:54:19.532719Z","steps":["trace[751447747] 'process raft request'  (duration: 103.252923ms)","trace[751447747] 'compare'  (duration: 113.509559ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:54:21.126637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.338815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-232100-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-11T20:54:21.126718Z","caller":"traceutil/trace.go:171","msg":"trace[605120623] range","detail":"{range_begin:/registry/csinodes/multinode-232100-m03; range_end:; response_count:0; response_revision:608; }","duration":"136.429877ms","start":"2024-03-11T20:54:20.990269Z","end":"2024-03-11T20:54:21.126699Z","steps":["trace[605120623] 'range keys from in-memory index tree'  (duration: 136.228144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:54:21.483599Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.020513ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6934573859999376945 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-232100-m03\" mod_revision:596 > success:<request_put:<key:\"/registry/minions/multinode-232100-m03\" value_size:2405 >> failure:<request_range:<key:\"/registry/minions/multinode-232100-m03\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-11T20:54:21.483791Z","caller":"traceutil/trace.go:171","msg":"trace[1161434873] linearizableReadLoop","detail":"{readStateIndex:641; appliedIndex:640; }","duration":"217.016182ms","start":"2024-03-11T20:54:21.266757Z","end":"2024-03-11T20:54:21.483773Z","steps":["trace[1161434873] 'read index received'  (duration: 60.55493ms)","trace[1161434873] 'applied index is now lower than readState.Index'  (duration: 156.459824ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-11T20:54:21.483893Z","caller":"traceutil/trace.go:171","msg":"trace[1318036055] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"283.696441ms","start":"2024-03-11T20:54:21.200186Z","end":"2024-03-11T20:54:21.483883Z","steps":["trace[1318036055] 'process raft request'  (duration: 127.313917ms)","trace[1318036055] 'compare'  (duration: 155.847468ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:54:21.484136Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.40439ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-vctfc\" ","response":"range_response_count:1 size:3440"}
	{"level":"info","ts":"2024-03-11T20:54:21.484189Z","caller":"traceutil/trace.go:171","msg":"trace[1090795334] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-vctfc; range_end:; response_count:1; response_revision:610; }","duration":"217.453287ms","start":"2024-03-11T20:54:21.266726Z","end":"2024-03-11T20:54:21.48418Z","steps":["trace[1090795334] 'agreement among raft nodes before linearized reading'  (duration: 217.377031ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:54:21.484078Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.347659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-11T20:54:21.484366Z","caller":"traceutil/trace.go:171","msg":"trace[1784533007] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:610; }","duration":"146.706763ms","start":"2024-03-11T20:54:21.337649Z","end":"2024-03-11T20:54:21.484356Z","steps":["trace[1784533007] 'agreement among raft nodes before linearized reading'  (duration: 146.325041ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:54:21.643628Z","caller":"traceutil/trace.go:171","msg":"trace[1790173369] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"148.622767ms","start":"2024-03-11T20:54:21.494991Z","end":"2024-03-11T20:54:21.643613Z","steps":["trace[1790173369] 'process raft request'  (duration: 146.376767ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:57:13.818371Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-11T20:57:13.818512Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-232100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.134:2380"],"advertise-client-urls":["https://192.168.39.134:2379"]}
	{"level":"warn","ts":"2024-03-11T20:57:13.822105Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-11T20:57:13.822207Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	WARNING: 2024/03/11 20:57:13 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-11T20:57:13.882766Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.134:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-11T20:57:13.882828Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.134:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-11T20:57:13.884406Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"52887eb9b9b3603c","current-leader-member-id":"52887eb9b9b3603c"}
	{"level":"info","ts":"2024-03-11T20:57:13.887359Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.134:2380"}
	{"level":"info","ts":"2024-03-11T20:57:13.887487Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.134:2380"}
	{"level":"info","ts":"2024-03-11T20:57:13.887497Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-232100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.134:2380"],"advertise-client-urls":["https://192.168.39.134:2379"]}
	
	
	==> kernel <==
	 21:02:42 up 10 min,  0 users,  load average: 0.07, 0.21, 0.17
	Linux multinode-232100 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [397e799f82d3b4a2fd977229b1f254d0562771524af131ef247cb56cc2835380] <==
	I0311 21:01:35.614409       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 21:01:45.626922       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 21:01:45.627097       1 main.go:227] handling current node
	I0311 21:01:45.627134       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 21:01:45.627176       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 21:01:55.720651       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 21:01:55.720926       1 main.go:227] handling current node
	I0311 21:01:55.720960       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 21:01:55.721160       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 21:02:05.734207       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 21:02:05.734265       1 main.go:227] handling current node
	I0311 21:02:05.734284       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 21:02:05.734291       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 21:02:15.745648       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 21:02:15.745707       1 main.go:227] handling current node
	I0311 21:02:15.745724       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 21:02:15.745730       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 21:02:25.758980       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 21:02:25.759097       1 main.go:227] handling current node
	I0311 21:02:25.759114       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 21:02:25.759121       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 21:02:35.763917       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 21:02:35.764143       1 main.go:227] handling current node
	I0311 21:02:35.764226       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 21:02:35.764323       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [f48ce4493a06c8cd032c3b310646c4cbb41e350161b5ef429482bb3040b17a67] <==
	I0311 20:56:32.532803       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.3.0/24] 
	I0311 20:56:42.541078       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 20:56:42.541159       1 main.go:227] handling current node
	I0311 20:56:42.541170       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 20:56:42.541177       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 20:56:42.541268       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0311 20:56:42.541298       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.3.0/24] 
	I0311 20:56:52.546915       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 20:56:52.546968       1 main.go:227] handling current node
	I0311 20:56:52.546996       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 20:56:52.547059       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 20:56:52.547183       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0311 20:56:52.547216       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.3.0/24] 
	I0311 20:57:02.560654       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 20:57:02.560713       1 main.go:227] handling current node
	I0311 20:57:02.560723       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 20:57:02.560730       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 20:57:02.560902       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0311 20:57:02.560938       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.3.0/24] 
	I0311 20:57:12.568631       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0311 20:57:12.568693       1 main.go:227] handling current node
	I0311 20:57:12.568704       1 main.go:223] Handling node with IPs: map[192.168.39.4:{}]
	I0311 20:57:12.568710       1 main.go:250] Node multinode-232100-m02 has CIDR [10.244.1.0/24] 
	I0311 20:57:12.568827       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0311 20:57:12.568864       1 main.go:250] Node multinode-232100-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [93897777952ec8ae9811c2a98cb03afd1a676c3227f8089f4ac3077bf0d19f62] <==
	I0311 20:58:52.808258       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0311 20:58:52.808520       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0311 20:58:52.808525       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0311 20:58:52.893654       1 shared_informer.go:318] Caches are synced for configmaps
	I0311 20:58:52.894255       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0311 20:58:52.902715       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0311 20:58:52.902797       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0311 20:58:52.908543       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0311 20:58:52.909335       1 aggregator.go:166] initial CRD sync complete...
	I0311 20:58:52.910713       1 autoregister_controller.go:141] Starting autoregister controller
	I0311 20:58:52.910769       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0311 20:58:52.910794       1 cache.go:39] Caches are synced for autoregister controller
	I0311 20:58:52.918510       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0311 20:58:52.925876       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0311 20:58:52.925916       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0311 20:58:52.954627       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0311 20:58:52.960679       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0311 20:58:53.795397       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0311 20:58:55.727584       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0311 20:58:55.882474       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0311 20:58:55.898388       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0311 20:58:55.971358       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0311 20:58:55.980174       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0311 20:59:05.381982       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0311 20:59:05.418066       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [d399b5316450e90f3694bce7bff29ed126ae340e8af98ef9eafb753f11462f73] <==
	I0311 20:57:13.831418       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0311 20:57:13.831617       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0311 20:57:13.831669       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0311 20:57:13.831712       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0311 20:57:13.831755       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0311 20:57:13.831773       1 establishing_controller.go:87] Shutting down EstablishingController
	I0311 20:57:13.831795       1 naming_controller.go:302] Shutting down NamingConditionController
	I0311 20:57:13.831817       1 controller.go:162] Shutting down OpenAPI controller
	I0311 20:57:13.832423       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I0311 20:57:13.832650       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0311 20:57:13.832669       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0311 20:57:13.832680       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0311 20:57:13.833106       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0311 20:57:13.838294       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0311 20:57:13.838640       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00c1be1c0)}: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0311 20:57:13.838725       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0311 20:57:13.838952       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0311 20:57:13.839989       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0311 20:57:13.840164       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0311 20:57:13.840303       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0311 20:57:13.840426       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0311 20:57:13.843407       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0311 20:57:13.843563       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0311 20:57:13.826469       1 controller.go:129] Ending legacy_token_tracking_controller
	I0311 20:57:13.843735       1 controller.go:130] Shutting down legacy_token_tracking_controller
	
	
	==> kube-controller-manager [1ad2090b379ff6c47613e83952056a4775099b86f57b0c58918b0d01f184d7b9] <==
	I0311 20:54:20.801933       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 20:54:20.803480       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-232100-m03\" does not exist"
	I0311 20:54:20.818309       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-232100-m03" podCIDRs=["10.244.2.0/24"]
	I0311 20:54:20.838888       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vctfc"
	I0311 20:54:20.841298       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8xzct"
	I0311 20:54:23.177297       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-232100-m03"
	I0311 20:54:23.177544       1 event.go:307] "Event occurred" object="multinode-232100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-232100-m03 event: Registered Node multinode-232100-m03 in Controller"
	I0311 20:54:27.514663       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 20:54:58.197254       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 20:54:58.198114       1 event.go:307] "Event occurred" object="multinode-232100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-232100-m03 event: Removing Node multinode-232100-m03 from Controller"
	I0311 20:55:00.969712       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 20:55:00.970181       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-232100-m03\" does not exist"
	I0311 20:55:00.993319       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-232100-m03" podCIDRs=["10.244.3.0/24"]
	I0311 20:55:03.199186       1 event.go:307] "Event occurred" object="multinode-232100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-232100-m03 event: Registered Node multinode-232100-m03 in Controller"
	I0311 20:55:06.955075       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m03"
	I0311 20:55:48.233076       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m03"
	I0311 20:55:48.233165       1 event.go:307] "Event occurred" object="multinode-232100-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-232100-m02 status is now: NodeNotReady"
	I0311 20:55:48.241875       1 event.go:307] "Event occurred" object="multinode-232100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-232100-m03 status is now: NodeNotReady"
	I0311 20:55:48.255992       1 event.go:307] "Event occurred" object="kube-system/kindnet-bgbtm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 20:55:48.259722       1 event.go:307] "Event occurred" object="kube-system/kindnet-8xzct" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 20:55:48.277664       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-lmrv2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 20:55:48.277715       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vctfc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 20:55:48.293962       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8xhwm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 20:55:48.303942       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.455556ms"
	I0311 20:55:48.304314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.17µs"
	
	
	==> kube-controller-manager [9a946faba1cc5368b7c09a7140ae7389a7382b0775ac4652445421a7b855a504] <==
	I0311 20:59:40.556761       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 20:59:40.578423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="53.207µs"
	I0311 20:59:40.595966       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.874µs"
	I0311 20:59:42.845000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="6.174641ms"
	I0311 20:59:42.845420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="53.386µs"
	I0311 20:59:45.482594       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-99hff" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-99hff"
	I0311 21:00:00.123590       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 21:00:00.485745       1 event.go:307] "Event occurred" object="multinode-232100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-232100-m03 event: Removing Node multinode-232100-m03 from Controller"
	I0311 21:00:02.809650       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-232100-m03\" does not exist"
	I0311 21:00:02.811152       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 21:00:02.833357       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-232100-m03" podCIDRs=["10.244.2.0/24"]
	I0311 21:00:05.486135       1 event.go:307] "Event occurred" object="multinode-232100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-232100-m03 event: Registered Node multinode-232100-m03 in Controller"
	I0311 21:00:14.907539       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 21:00:20.830802       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-232100-m02"
	I0311 21:00:25.502237       1 event.go:307] "Event occurred" object="multinode-232100-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-232100-m03 event: Removing Node multinode-232100-m03 from Controller"
	I0311 21:00:55.517585       1 event.go:307] "Event occurred" object="multinode-232100-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-232100-m02 status is now: NodeNotReady"
	I0311 21:00:55.529680       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-99hff" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 21:00:55.546580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="16.05184ms"
	I0311 21:00:55.546669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="34.06µs"
	I0311 21:00:55.547104       1 event.go:307] "Event occurred" object="kube-system/kindnet-bgbtm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 21:00:55.563232       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-lmrv2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0311 21:01:05.332242       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-8xzct"
	I0311 21:01:05.371450       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-8xzct"
	I0311 21:01:05.371501       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-vctfc"
	I0311 21:01:05.418271       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-vctfc"
	
	
	==> kube-proxy [2a9ab4b51ae261322c62338c6b69c1425d5c5e5616be3454f9a8389b28e80f01] <==
	I0311 20:58:54.558844       1 server_others.go:69] "Using iptables proxy"
	I0311 20:58:54.571078       1 node.go:141] Successfully retrieved node IP: 192.168.39.134
	I0311 20:58:54.631369       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 20:58:54.631427       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 20:58:54.636716       1 server_others.go:152] "Using iptables Proxier"
	I0311 20:58:54.636799       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 20:58:54.637141       1 server.go:846] "Version info" version="v1.28.4"
	I0311 20:58:54.637177       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:58:54.637961       1 config.go:188] "Starting service config controller"
	I0311 20:58:54.638127       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 20:58:54.638187       1 config.go:97] "Starting endpoint slice config controller"
	I0311 20:58:54.638192       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 20:58:54.638650       1 config.go:315] "Starting node config controller"
	I0311 20:58:54.638693       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 20:58:54.740157       1 shared_informer.go:318] Caches are synced for node config
	I0311 20:58:54.740185       1 shared_informer.go:318] Caches are synced for service config
	I0311 20:58:54.740212       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [54c8e9ef07bcb48501144c7876db60d5f81d518c2657ef1c86c921967c49fcce] <==
	I0311 20:52:59.114435       1 server_others.go:69] "Using iptables proxy"
	I0311 20:52:59.130558       1 node.go:141] Successfully retrieved node IP: 192.168.39.134
	I0311 20:52:59.278550       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 20:52:59.278598       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 20:52:59.283407       1 server_others.go:152] "Using iptables Proxier"
	I0311 20:52:59.283468       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 20:52:59.283626       1 server.go:846] "Version info" version="v1.28.4"
	I0311 20:52:59.283666       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:52:59.284945       1 config.go:188] "Starting service config controller"
	I0311 20:52:59.285125       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 20:52:59.285223       1 config.go:97] "Starting endpoint slice config controller"
	I0311 20:52:59.285244       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 20:52:59.287395       1 config.go:315] "Starting node config controller"
	I0311 20:52:59.287434       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 20:52:59.386249       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 20:52:59.386285       1 shared_informer.go:318] Caches are synced for service config
	I0311 20:52:59.387606       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d9bb108f87baf24ab126bcbc64251ab0929eca58f98016ddfeef08e833117aae] <==
	E0311 20:52:43.223905       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 20:52:43.227109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 20:52:43.227181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 20:52:44.065300       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 20:52:44.065407       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 20:52:44.103530       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 20:52:44.103650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 20:52:44.113213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 20:52:44.113232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 20:52:44.191517       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 20:52:44.191576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0311 20:52:44.249764       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 20:52:44.249818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0311 20:52:44.260214       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 20:52:44.260266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 20:52:44.330582       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0311 20:52:44.330631       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 20:52:44.428087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 20:52:44.428137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 20:52:44.437703       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 20:52:44.437751       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0311 20:52:46.111110       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 20:57:13.813864       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0311 20:57:13.816234       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0311 20:57:13.816487       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [da33624f7e932928d864da657e73ab7a1c23148c2b6f4efa9af40a45842f644f] <==
	I0311 20:58:50.648637       1 serving.go:348] Generated self-signed cert in-memory
	W0311 20:58:52.861471       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0311 20:58:52.862093       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 20:58:52.864095       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0311 20:58:52.864223       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0311 20:58:52.912364       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0311 20:58:52.912594       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:58:52.923870       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0311 20:58:52.924164       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0311 20:58:52.926350       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 20:58:52.924191       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0311 20:58:53.026855       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 21:00:48 multinode-232100 kubelet[3111]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:00:48 multinode-232100 kubelet[3111]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:00:48 multinode-232100 kubelet[3111]: E0311 21:00:48.667466    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/pode47e5bbe85a59f76ef5b1b2f838a8fd1/crio-e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820: Error finding container e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820: Status 404 returned error can't find the container with id e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820
	Mar 11 21:00:48 multinode-232100 kubelet[3111]: E0311 21:00:48.667771    3111 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod71289465-761a-45e9-aeea-487886492715/crio-f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce: Error finding container f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce: Status 404 returned error can't find the container with id f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce
	Mar 11 21:00:48 multinode-232100 kubelet[3111]: E0311 21:00:48.668218    3111 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pode93127ae-9454-4660-9b50-359d12adcffe/crio-7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97: Error finding container 7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97: Status 404 returned error can't find the container with id 7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97
	Mar 11 21:00:48 multinode-232100 kubelet[3111]: E0311 21:00:48.668460    3111 manager.go:1106] Failed to create existing container: /kubepods/poda818af00-dedc-4df2-98f0-0f657141080e/crio-71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66: Error finding container 71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66: Status 404 returned error can't find the container with id 71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66
	Mar 11 21:00:48 multinode-232100 kubelet[3111]: E0311 21:00:48.668721    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod0e6c74ae7825d32a30354efaeda334ed/crio-3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde: Error finding container 3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde: Status 404 returned error can't find the container with id 3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde
	Mar 11 21:00:48 multinode-232100 kubelet[3111]: E0311 21:00:48.668949    3111 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod32d28c9d-7ec7-44b0-9dbd-039296a7a274/crio-e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5: Error finding container e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5: Status 404 returned error can't find the container with id e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5
	Mar 11 21:00:48 multinode-232100 kubelet[3111]: E0311 21:00:48.669201    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod03d430d93ac79511930f8ee4e584b8a9/crio-7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb: Error finding container 7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb: Status 404 returned error can't find the container with id 7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb
	Mar 11 21:00:48 multinode-232100 kubelet[3111]: E0311 21:00:48.669625    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/podc2b9427c-06b4-4f56-bc4a-4adc16471a65/crio-62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722: Error finding container 62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722: Status 404 returned error can't find the container with id 62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722
	Mar 11 21:00:48 multinode-232100 kubelet[3111]: E0311 21:00:48.669799    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/podc755fbdb681fc0a3c29e9c4a4faa661d/crio-1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86: Error finding container 1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86: Status 404 returned error can't find the container with id 1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86
	Mar 11 21:01:48 multinode-232100 kubelet[3111]: E0311 21:01:48.614819    3111 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:01:48 multinode-232100 kubelet[3111]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:01:48 multinode-232100 kubelet[3111]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:01:48 multinode-232100 kubelet[3111]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:01:48 multinode-232100 kubelet[3111]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:01:48 multinode-232100 kubelet[3111]: E0311 21:01:48.667726    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/podc755fbdb681fc0a3c29e9c4a4faa661d/crio-1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86: Error finding container 1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86: Status 404 returned error can't find the container with id 1ca93044746442a04be69b2ebd404b5db4c2dcbe40cff201b24ae138566bea86
	Mar 11 21:01:48 multinode-232100 kubelet[3111]: E0311 21:01:48.668302    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod0e6c74ae7825d32a30354efaeda334ed/crio-3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde: Error finding container 3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde: Status 404 returned error can't find the container with id 3e7917fa7ecc66ebdc195ee3e869b2d5bebc2c531f428f93ae710b2e8352ffde
	Mar 11 21:01:48 multinode-232100 kubelet[3111]: E0311 21:01:48.668667    3111 manager.go:1106] Failed to create existing container: /kubepods/poda818af00-dedc-4df2-98f0-0f657141080e/crio-71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66: Error finding container 71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66: Status 404 returned error can't find the container with id 71e18232ae35877ecd025204cb923e7e7bf5404aa9dc2aacf48a000a4256ca66
	Mar 11 21:01:48 multinode-232100 kubelet[3111]: E0311 21:01:48.668927    3111 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod71289465-761a-45e9-aeea-487886492715/crio-f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce: Error finding container f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce: Status 404 returned error can't find the container with id f3be5dce7a23175327f2fa646c81d0afbf66167f8825dbf374a04732696c8cce
	Mar 11 21:01:48 multinode-232100 kubelet[3111]: E0311 21:01:48.669425    3111 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pode93127ae-9454-4660-9b50-359d12adcffe/crio-7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97: Error finding container 7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97: Status 404 returned error can't find the container with id 7983479821d106d6a641170be828eeb5b542efa68c1871aca55cea3e0b888b97
	Mar 11 21:01:48 multinode-232100 kubelet[3111]: E0311 21:01:48.669701    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/podc2b9427c-06b4-4f56-bc4a-4adc16471a65/crio-62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722: Error finding container 62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722: Status 404 returned error can't find the container with id 62bf0ad89abcec63781641812558d1c959c9149d2deaa23580625f86080b8722
	Mar 11 21:01:48 multinode-232100 kubelet[3111]: E0311 21:01:48.670171    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod03d430d93ac79511930f8ee4e584b8a9/crio-7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb: Error finding container 7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb: Status 404 returned error can't find the container with id 7e41c8b42456d2493fe86752392f794fea900532f4adec2793c092568998d3cb
	Mar 11 21:01:48 multinode-232100 kubelet[3111]: E0311 21:01:48.670430    3111 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod32d28c9d-7ec7-44b0-9dbd-039296a7a274/crio-e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5: Error finding container e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5: Status 404 returned error can't find the container with id e7fd5611a750923d84d246b71eb6ad5a0f41fa6dbcbb912da26f93ef4bff2cf5
	Mar 11 21:01:48 multinode-232100 kubelet[3111]: E0311 21:01:48.670716    3111 manager.go:1106] Failed to create existing container: /kubepods/burstable/pode47e5bbe85a59f76ef5b1b2f838a8fd1/crio-e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820: Error finding container e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820: Status 404 returned error can't find the container with id e7db90ecbf0272ae06a8f30cb3f7de170a02058b3e3426f682ac1fc1d34da820
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:02:41.743808   44668 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18358-11004/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-232100 -n multinode-232100
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-232100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.54s)
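Note on the stderr above: the logs.go:258 failure ("bufio.Scanner: token too long") is standard Go behavior when a single line in the scanned file (here lastStart.txt) exceeds bufio.MaxScanTokenSize (64 KiB). Below is a minimal, self-contained Go sketch of that failure mode and of the usual remedy of enlarging the scanner buffer; it is illustrative only and is not minikube's actual logs code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// A single "line" longer than bufio.MaxScanTokenSize (64 KiB) stands in
	// for the oversized entry in lastStart.txt mentioned in the stderr above.
	long := strings.Repeat("x", bufio.MaxScanTokenSize+1)

	// Default scanner: Scan stops and Err reports "bufio.Scanner: token too long".
	s := bufio.NewScanner(strings.NewReader(long))
	for s.Scan() {
	}
	fmt.Fprintln(os.Stderr, "default scanner:", s.Err())

	// Enlarged buffer: the same oversized line scans cleanly.
	s = bufio.NewScanner(strings.NewReader(long))
	s.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow lines up to 1 MiB
	for s.Scan() {
		fmt.Println("read line of length", len(s.Text()))
	}
	fmt.Fprintln(os.Stderr, "enlarged scanner:", s.Err())
}

Running this prints the ErrTooLong error for the default scanner, then reads the same oversized line once the buffer limit is raised via Scanner.Buffer.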

                                                
                                    
x
+
TestPreload (281.42s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-581589 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0311 21:06:58.807630   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 21:07:38.935252   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-581589 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m20.510404594s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-581589 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-581589 image pull gcr.io/k8s-minikube/busybox: (1.128428527s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-581589
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-581589: exit status 82 (2m0.471321037s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-581589"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-581589 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-03-11 21:10:47.283108545 +0000 UTC m=+3655.454782857
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-581589 -n test-preload-581589
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-581589 -n test-preload-581589: exit status 3 (18.426158223s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:11:05.705052   46990 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.237:22: connect: no route to host
	E0311 21:11:05.705072   46990 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.237:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-581589" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-581589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-581589
--- FAIL: TestPreload (281.42s)
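Note on the failure above: minikube stop returned exit status 82 (GUEST_STOP_TIMEOUT) because the guest still reported state "Running" when the roughly two-minute wait expired. The following Go sketch shows only the general wait-for-stopped shape of that check, using a hypothetical hostState() probe; it is not minikube's or libmachine's real stop logic or API.

package main

import (
	"fmt"
	"time"
)

// hostState is a hypothetical probe standing in for however a driver reports
// the VM state ("Running", "Stopped", ...); it is not a real minikube API.
func hostState() string {
	return "Running" // a stuck guest never leaves this state
}

// waitForStop polls until the guest reports "Stopped" or the deadline passes,
// which is the shape of the failure above: after the wait the state is still
// "Running", so the caller gives up with a stop-timeout error.
func waitForStop(timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if hostState() == "Stopped" {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("stop: unable to stop vm, current state %q", hostState())
}

func main() {
	// Short values keep the sketch fast; the real test waited about 2m0s.
	if err := waitForStop(2*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println("GUEST_STOP_TIMEOUT:", err)
	}
}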

                                                
                                    
x
+
TestKubernetesUpgrade (384.9s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171195 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-171195 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m31.899011078s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-171195] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-171195" primary control-plane node in "kubernetes-upgrade-171195" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 21:13:03.707267   47948 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:13:03.707362   47948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:13:03.707369   47948 out.go:304] Setting ErrFile to fd 2...
	I0311 21:13:03.707373   47948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:13:03.707584   47948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:13:03.708162   47948 out.go:298] Setting JSON to false
	I0311 21:13:03.709020   47948 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6933,"bootTime":1710184651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:13:03.709081   47948 start.go:139] virtualization: kvm guest
	I0311 21:13:03.711059   47948 out.go:177] * [kubernetes-upgrade-171195] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:13:03.713189   47948 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:13:03.713192   47948 notify.go:220] Checking for updates...
	I0311 21:13:03.716684   47948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:13:03.719150   47948 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:13:03.721576   47948 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:13:03.723897   47948 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:13:03.726759   47948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:13:03.728425   47948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:13:03.769697   47948 out.go:177] * Using the kvm2 driver based on user configuration
	I0311 21:13:03.771723   47948 start.go:297] selected driver: kvm2
	I0311 21:13:03.771745   47948 start.go:901] validating driver "kvm2" against <nil>
	I0311 21:13:03.771760   47948 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:13:03.772448   47948 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:13:03.772543   47948 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:13:03.787704   47948 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:13:03.787751   47948 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 21:13:03.787998   47948 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 21:13:03.788027   47948 cni.go:84] Creating CNI manager for ""
	I0311 21:13:03.788035   47948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:13:03.788051   47948 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 21:13:03.788130   47948 start.go:340] cluster config:
	{Name:kubernetes-upgrade-171195 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:13:03.788237   47948 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:13:03.789975   47948 out.go:177] * Starting "kubernetes-upgrade-171195" primary control-plane node in "kubernetes-upgrade-171195" cluster
	I0311 21:13:03.791270   47948 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:13:03.791303   47948 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0311 21:13:03.791316   47948 cache.go:56] Caching tarball of preloaded images
	I0311 21:13:03.791386   47948 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:13:03.791398   47948 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0311 21:13:03.791705   47948 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/config.json ...
	I0311 21:13:03.791730   47948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/config.json: {Name:mk711422c6591adffddc11ce1f8cfe40eb8a3f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:13:03.791856   47948 start.go:360] acquireMachinesLock for kubernetes-upgrade-171195: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:13:03.791906   47948 start.go:364] duration metric: took 33.789µs to acquireMachinesLock for "kubernetes-upgrade-171195"
	I0311 21:13:03.791931   47948 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-171195 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:13:03.791992   47948 start.go:125] createHost starting for "" (driver="kvm2")
	I0311 21:13:03.794112   47948 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 21:13:03.794273   47948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:13:03.794322   47948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:13:03.809591   47948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
	I0311 21:13:03.810022   47948 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:13:03.810585   47948 main.go:141] libmachine: Using API Version  1
	I0311 21:13:03.810613   47948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:13:03.811007   47948 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:13:03.811210   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetMachineName
	I0311 21:13:03.811362   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .DriverName
	I0311 21:13:03.811521   47948 start.go:159] libmachine.API.Create for "kubernetes-upgrade-171195" (driver="kvm2")
	I0311 21:13:03.811549   47948 client.go:168] LocalClient.Create starting
	I0311 21:13:03.811581   47948 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem
	I0311 21:13:03.811617   47948 main.go:141] libmachine: Decoding PEM data...
	I0311 21:13:03.811642   47948 main.go:141] libmachine: Parsing certificate...
	I0311 21:13:03.811703   47948 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem
	I0311 21:13:03.811728   47948 main.go:141] libmachine: Decoding PEM data...
	I0311 21:13:03.811745   47948 main.go:141] libmachine: Parsing certificate...
	I0311 21:13:03.811801   47948 main.go:141] libmachine: Running pre-create checks...
	I0311 21:13:03.811820   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .PreCreateCheck
	I0311 21:13:03.812680   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetConfigRaw
	I0311 21:13:03.813921   47948 main.go:141] libmachine: Creating machine...
	I0311 21:13:03.813937   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .Create
	I0311 21:13:03.814064   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Creating KVM machine...
	I0311 21:13:03.815411   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found existing default KVM network
	I0311 21:13:03.816176   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:03.816028   48026 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015aa0}
	I0311 21:13:03.816227   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | created network xml: 
	I0311 21:13:03.816253   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | <network>
	I0311 21:13:03.816270   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG |   <name>mk-kubernetes-upgrade-171195</name>
	I0311 21:13:03.816283   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG |   <dns enable='no'/>
	I0311 21:13:03.816294   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG |   
	I0311 21:13:03.816308   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0311 21:13:03.816321   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG |     <dhcp>
	I0311 21:13:03.816335   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0311 21:13:03.816348   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG |     </dhcp>
	I0311 21:13:03.816360   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG |   </ip>
	I0311 21:13:03.816373   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG |   
	I0311 21:13:03.816393   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | </network>
	I0311 21:13:03.816406   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | 
	I0311 21:13:03.821342   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | trying to create private KVM network mk-kubernetes-upgrade-171195 192.168.39.0/24...
	I0311 21:13:03.893255   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | private KVM network mk-kubernetes-upgrade-171195 192.168.39.0/24 created
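The network XML above is handed to libvirt to define and start the private mk-* network. A rough, illustrative sketch of that call with the libvirt Go bindings; the import path and simplified error handling are assumptions here, not the kvm2 driver's actual code:

package main

import (
	"log"

	libvirt "github.com/libvirt/libvirt-go"
)

const networkXML = `<network>
  <name>mk-kubernetes-upgrade-171195</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent network from XML, then bring it up.
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()
	if err := net.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("private KVM network created")
}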
	I0311 21:13:03.893287   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Setting up store path in /home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195 ...
	I0311 21:13:03.893302   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:03.893234   48026 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:13:03.893328   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Building disk image from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0311 21:13:03.893385   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Downloading /home/jenkins/minikube-integration/18358-11004/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0311 21:13:04.122397   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:04.122283   48026 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195/id_rsa...
	I0311 21:13:04.567339   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:04.567233   48026 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195/kubernetes-upgrade-171195.rawdisk...
	I0311 21:13:04.567369   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Writing magic tar header
	I0311 21:13:04.567387   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Writing SSH key tar header
	I0311 21:13:04.567400   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:04.567349   48026 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195 ...
	I0311 21:13:04.567468   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195
	I0311 21:13:04.567523   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines
	I0311 21:13:04.567553   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195 (perms=drwx------)
	I0311 21:13:04.567568   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:13:04.567582   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004
	I0311 21:13:04.567599   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines (perms=drwxr-xr-x)
	I0311 21:13:04.567617   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0311 21:13:04.567631   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube (perms=drwxr-xr-x)
	I0311 21:13:04.567644   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Checking permissions on dir: /home/jenkins
	I0311 21:13:04.567667   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Checking permissions on dir: /home
	I0311 21:13:04.567682   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004 (perms=drwxrwxr-x)
	I0311 21:13:04.567698   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0311 21:13:04.567710   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0311 21:13:04.567725   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Creating domain...
	I0311 21:13:04.567737   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Skipping /home - not owner
	I0311 21:13:04.568679   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) define libvirt domain using xml: 
	I0311 21:13:04.568701   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) <domain type='kvm'>
	I0311 21:13:04.568712   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)   <name>kubernetes-upgrade-171195</name>
	I0311 21:13:04.568725   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)   <memory unit='MiB'>2200</memory>
	I0311 21:13:04.568748   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)   <vcpu>2</vcpu>
	I0311 21:13:04.568762   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)   <features>
	I0311 21:13:04.568776   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <acpi/>
	I0311 21:13:04.568786   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <apic/>
	I0311 21:13:04.568798   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <pae/>
	I0311 21:13:04.568815   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     
	I0311 21:13:04.568827   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)   </features>
	I0311 21:13:04.568846   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)   <cpu mode='host-passthrough'>
	I0311 21:13:04.568858   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)   
	I0311 21:13:04.568873   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)   </cpu>
	I0311 21:13:04.568885   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)   <os>
	I0311 21:13:04.568893   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <type>hvm</type>
	I0311 21:13:04.568905   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <boot dev='cdrom'/>
	I0311 21:13:04.568916   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <boot dev='hd'/>
	I0311 21:13:04.568937   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <bootmenu enable='no'/>
	I0311 21:13:04.568954   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)   </os>
	I0311 21:13:04.568966   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)   <devices>
	I0311 21:13:04.568975   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <disk type='file' device='cdrom'>
	I0311 21:13:04.568984   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195/boot2docker.iso'/>
	I0311 21:13:04.568992   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)       <target dev='hdc' bus='scsi'/>
	I0311 21:13:04.568998   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)       <readonly/>
	I0311 21:13:04.569005   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     </disk>
	I0311 21:13:04.569011   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <disk type='file' device='disk'>
	I0311 21:13:04.569019   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0311 21:13:04.569051   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195/kubernetes-upgrade-171195.rawdisk'/>
	I0311 21:13:04.569076   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)       <target dev='hda' bus='virtio'/>
	I0311 21:13:04.569088   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     </disk>
	I0311 21:13:04.569100   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <interface type='network'>
	I0311 21:13:04.569115   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)       <source network='mk-kubernetes-upgrade-171195'/>
	I0311 21:13:04.569127   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)       <model type='virtio'/>
	I0311 21:13:04.569139   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     </interface>
	I0311 21:13:04.569150   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <interface type='network'>
	I0311 21:13:04.569162   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)       <source network='default'/>
	I0311 21:13:04.569177   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)       <model type='virtio'/>
	I0311 21:13:04.569190   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     </interface>
	I0311 21:13:04.569201   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <serial type='pty'>
	I0311 21:13:04.569212   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)       <target port='0'/>
	I0311 21:13:04.569223   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     </serial>
	I0311 21:13:04.569235   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <console type='pty'>
	I0311 21:13:04.569249   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)       <target type='serial' port='0'/>
	I0311 21:13:04.569266   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     </console>
	I0311 21:13:04.569277   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     <rng model='virtio'>
	I0311 21:13:04.569290   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)       <backend model='random'>/dev/random</backend>
	I0311 21:13:04.569300   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     </rng>
	I0311 21:13:04.569308   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     
	I0311 21:13:04.569322   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)     
	I0311 21:13:04.569333   47948 main.go:141] libmachine: (kubernetes-upgrade-171195)   </devices>
	I0311 21:13:04.569344   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) </domain>
	I0311 21:13:04.569355   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) 
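The <domain> document printed above is defined and booted the same way. An illustrative, self-contained sketch, again assuming the libvirt Go bindings and a local domain.xml file holding the XML shown above:

package main

import (
	"log"
	"os"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	xml, err := os.ReadFile("domain.xml") // the <domain> document from the log
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persistent definition
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
		log.Fatal(err)
	}
}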
	I0311 21:13:04.573647   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:81:b7:45 in network default
	I0311 21:13:04.574250   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Ensuring networks are active...
	I0311 21:13:04.574268   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:04.574956   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Ensuring network default is active
	I0311 21:13:04.575248   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Ensuring network mk-kubernetes-upgrade-171195 is active
	I0311 21:13:04.575807   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Getting domain xml...
	I0311 21:13:04.576809   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Creating domain...
	I0311 21:13:05.879542   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Waiting to get IP...
	I0311 21:13:05.880486   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:05.880959   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:05.880983   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:05.880890   48026 retry.go:31] will retry after 264.943975ms: waiting for machine to come up
	I0311 21:13:06.147217   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:06.147669   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:06.147694   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:06.147632   48026 retry.go:31] will retry after 288.763153ms: waiting for machine to come up
	I0311 21:13:06.438058   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:06.438557   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:06.438578   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:06.438517   48026 retry.go:31] will retry after 331.34685ms: waiting for machine to come up
	I0311 21:13:06.771728   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:06.772172   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:06.772244   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:06.772180   48026 retry.go:31] will retry after 584.759733ms: waiting for machine to come up
	I0311 21:13:07.358910   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:07.359298   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:07.359325   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:07.359250   48026 retry.go:31] will retry after 477.548468ms: waiting for machine to come up
	I0311 21:13:07.838824   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:07.839212   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:07.839242   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:07.839179   48026 retry.go:31] will retry after 862.893913ms: waiting for machine to come up
	I0311 21:13:08.703815   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:08.704235   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:08.704266   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:08.704189   48026 retry.go:31] will retry after 850.182161ms: waiting for machine to come up
	I0311 21:13:09.555567   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:09.556122   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:09.556143   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:09.556051   48026 retry.go:31] will retry after 1.073039946s: waiting for machine to come up
	I0311 21:13:10.630936   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:10.631373   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:10.631404   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:10.631324   48026 retry.go:31] will retry after 1.139661444s: waiting for machine to come up
	I0311 21:13:11.772530   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:11.772906   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:11.772935   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:11.772859   48026 retry.go:31] will retry after 1.501868675s: waiting for machine to come up
	I0311 21:13:13.276280   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:13.276673   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:13.276702   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:13.276627   48026 retry.go:31] will retry after 2.817596924s: waiting for machine to come up
	I0311 21:13:16.095340   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:16.095782   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:16.095814   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:16.095734   48026 retry.go:31] will retry after 3.18488903s: waiting for machine to come up
	I0311 21:13:19.281995   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:19.282428   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:19.282453   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:19.282388   48026 retry.go:31] will retry after 3.762059513s: waiting for machine to come up
	I0311 21:13:23.049277   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:23.049697   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find current IP address of domain kubernetes-upgrade-171195 in network mk-kubernetes-upgrade-171195
	I0311 21:13:23.049727   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | I0311 21:13:23.049638   48026 retry.go:31] will retry after 4.263028758s: waiting for machine to come up
	I0311 21:13:27.313972   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.314430   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Found IP for machine: 192.168.39.241
	I0311 21:13:27.314445   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Reserving static IP address...
	I0311 21:13:27.314462   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has current primary IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.314845   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-171195", mac: "52:54:00:08:90:45", ip: "192.168.39.241"} in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.385493   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Getting to WaitForSSH function...
	I0311 21:13:27.385523   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Reserved static IP address: 192.168.39.241
	I0311 21:13:27.385537   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Waiting for SSH to be available...
	I0311 21:13:27.387840   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.388242   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:08:90:45}
	I0311 21:13:27.388276   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.388370   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Using SSH client type: external
	I0311 21:13:27.388398   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195/id_rsa (-rw-------)
	I0311 21:13:27.388429   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:13:27.388450   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | About to run SSH command:
	I0311 21:13:27.388463   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | exit 0
	I0311 21:13:27.512842   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | SSH cmd err, output: <nil>: 
	I0311 21:13:27.513111   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) KVM machine creation complete!
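The "will retry after ..." lines above come from a poll loop with a growing, jittered delay while the guest acquires a DHCP lease. A generic sketch of that pattern, where lookupIP is a placeholder rather than the driver's real lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for querying libvirt's DHCP leases for the domain's MAC.
func lookupIP() (string, error) { return "", errNoLease }

// waitForIP polls with a growing, jittered delay, mirroring the
// "will retry after ..." lines in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2 // back off, capped at a few seconds
		}
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}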
	I0311 21:13:27.513404   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetConfigRaw
	I0311 21:13:27.513995   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .DriverName
	I0311 21:13:27.514175   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .DriverName
	I0311 21:13:27.514333   47948 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0311 21:13:27.514408   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetState
	I0311 21:13:27.515954   47948 main.go:141] libmachine: Detecting operating system of created instance...
	I0311 21:13:27.515969   47948 main.go:141] libmachine: Waiting for SSH to be available...
	I0311 21:13:27.515975   47948 main.go:141] libmachine: Getting to WaitForSSH function...
	I0311 21:13:27.515981   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHHostname
	I0311 21:13:27.518568   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.518925   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:27.518954   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.519133   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHPort
	I0311 21:13:27.519321   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:27.519494   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:27.519609   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHUsername
	I0311 21:13:27.519793   47948 main.go:141] libmachine: Using SSH client type: native
	I0311 21:13:27.519983   47948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0311 21:13:27.520000   47948 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0311 21:13:27.628188   47948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
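The "exit 0" command above is a plain SSH liveness probe. A stripped-down equivalent using golang.org/x/crypto/ssh, with the key path, user, and address taken from the log and host-key checking disabled as the logged SSH options indicate; this is an illustration, not the libmachine client:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // StrictHostKeyChecking=no
	}
	client, err := ssh.Dial("tcp", "192.168.39.241:22", cfg)
	if err != nil {
		log.Fatal(err) // machine not ready yet; callers retry
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	if err := sess.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}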
	I0311 21:13:27.628217   47948 main.go:141] libmachine: Detecting the provisioner...
	I0311 21:13:27.628228   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHHostname
	I0311 21:13:27.631185   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.631575   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:27.631619   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.631766   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHPort
	I0311 21:13:27.631976   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:27.632160   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:27.632337   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHUsername
	I0311 21:13:27.632521   47948 main.go:141] libmachine: Using SSH client type: native
	I0311 21:13:27.632685   47948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0311 21:13:27.632695   47948 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0311 21:13:27.737768   47948 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0311 21:13:27.737855   47948 main.go:141] libmachine: found compatible host: buildroot
	I0311 21:13:27.737868   47948 main.go:141] libmachine: Provisioning with buildroot...
	I0311 21:13:27.737875   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetMachineName
	I0311 21:13:27.738130   47948 buildroot.go:166] provisioning hostname "kubernetes-upgrade-171195"
	I0311 21:13:27.738161   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetMachineName
	I0311 21:13:27.738350   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHHostname
	I0311 21:13:27.741018   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.741357   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:27.741381   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.741534   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHPort
	I0311 21:13:27.741707   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:27.741836   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:27.741977   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHUsername
	I0311 21:13:27.742144   47948 main.go:141] libmachine: Using SSH client type: native
	I0311 21:13:27.742295   47948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0311 21:13:27.742308   47948 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-171195 && echo "kubernetes-upgrade-171195" | sudo tee /etc/hostname
	I0311 21:13:27.861535   47948 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-171195
	
	I0311 21:13:27.861570   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHHostname
	I0311 21:13:27.864375   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.864700   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:27.864725   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.864874   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHPort
	I0311 21:13:27.865052   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:27.865241   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:27.865354   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHUsername
	I0311 21:13:27.865507   47948 main.go:141] libmachine: Using SSH client type: native
	I0311 21:13:27.865654   47948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0311 21:13:27.865671   47948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-171195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-171195/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-171195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:13:27.978239   47948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:13:27.978267   47948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:13:27.978294   47948 buildroot.go:174] setting up certificates
	I0311 21:13:27.978306   47948 provision.go:84] configureAuth start
	I0311 21:13:27.978317   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetMachineName
	I0311 21:13:27.978594   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetIP
	I0311 21:13:27.981016   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.981365   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:27.981396   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.981521   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHHostname
	I0311 21:13:27.983670   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.984000   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:27.984022   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:27.984207   47948 provision.go:143] copyHostCerts
	I0311 21:13:27.984267   47948 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:13:27.984277   47948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:13:27.984339   47948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:13:27.984420   47948 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:13:27.984428   47948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:13:27.984450   47948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:13:27.984503   47948 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:13:27.984510   47948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:13:27.984530   47948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:13:27.984608   47948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-171195 san=[127.0.0.1 192.168.39.241 kubernetes-upgrade-171195 localhost minikube]
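The SAN list in that line (127.0.0.1, 192.168.39.241, kubernetes-upgrade-171195, localhost, minikube) ends up in the generated server certificate. A compressed crypto/x509 sketch of building such a certificate; it self-signs to stay short, whereas the real server.pem is signed with the minikube CA key:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-171195"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-171195", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.241")},
	}
	// Self-signed for brevity; minikube signs with its CA cert and ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}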
	I0311 21:13:28.034487   47948 provision.go:177] copyRemoteCerts
	I0311 21:13:28.034535   47948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:13:28.034552   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHHostname
	I0311 21:13:28.036942   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.037251   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:28.037282   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.037408   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHPort
	I0311 21:13:28.037593   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:28.037717   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHUsername
	I0311 21:13:28.037851   47948 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195/id_rsa Username:docker}
	I0311 21:13:28.120295   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0311 21:13:28.145653   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:13:28.170288   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:13:28.195417   47948 provision.go:87] duration metric: took 217.097617ms to configureAuth
	I0311 21:13:28.195449   47948 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:13:28.195654   47948 config.go:182] Loaded profile config "kubernetes-upgrade-171195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:13:28.195727   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHHostname
	I0311 21:13:28.198381   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.198739   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:28.198769   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.198927   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHPort
	I0311 21:13:28.199118   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:28.199309   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:28.199436   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHUsername
	I0311 21:13:28.199587   47948 main.go:141] libmachine: Using SSH client type: native
	I0311 21:13:28.199743   47948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0311 21:13:28.199756   47948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:13:28.480549   47948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:13:28.480574   47948 main.go:141] libmachine: Checking connection to Docker...
	I0311 21:13:28.480585   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetURL
	I0311 21:13:28.481771   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Using libvirt version 6000000
	I0311 21:13:28.484051   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.484380   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:28.484400   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.484594   47948 main.go:141] libmachine: Docker is up and running!
	I0311 21:13:28.484618   47948 main.go:141] libmachine: Reticulating splines...
	I0311 21:13:28.484625   47948 client.go:171] duration metric: took 24.673065344s to LocalClient.Create
	I0311 21:13:28.484650   47948 start.go:167] duration metric: took 24.673128942s to libmachine.API.Create "kubernetes-upgrade-171195"
	I0311 21:13:28.484664   47948 start.go:293] postStartSetup for "kubernetes-upgrade-171195" (driver="kvm2")
	I0311 21:13:28.484677   47948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:13:28.484700   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .DriverName
	I0311 21:13:28.484978   47948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:13:28.485011   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHHostname
	I0311 21:13:28.487172   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.487516   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:28.487557   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.487697   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHPort
	I0311 21:13:28.487881   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:28.488072   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHUsername
	I0311 21:13:28.488211   47948 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195/id_rsa Username:docker}
	I0311 21:13:28.572276   47948 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:13:28.576867   47948 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:13:28.576888   47948 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:13:28.576941   47948 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:13:28.577005   47948 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:13:28.577084   47948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:13:28.587817   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:13:28.612931   47948 start.go:296] duration metric: took 128.253737ms for postStartSetup
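The filesync lines above map anything under .minikube/files/<path> to /<path> on the guest (here files/etc/ssl/certs/182352.pem to /etc/ssl/certs/182352.pem). A small sketch of that scan, illustrative rather than minikube's filesync code:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// scanLocalAssets mirrors the filesync step: every regular file under
// <miniHome>/files/<path> is destined for /<path> on the guest.
func scanLocalAssets(miniHome string) (map[string]string, error) {
	root := filepath.Join(miniHome, "files")
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, p)
		if err != nil {
			return err
		}
		assets[p] = "/" + rel // files/etc/ssl/certs/182352.pem -> /etc/ssl/certs/182352.pem
		return nil
	})
	return assets, err
}

func main() {
	m, err := scanLocalAssets("/home/jenkins/minikube-integration/18358-11004/.minikube")
	if err != nil {
		fmt.Println(err)
		return
	}
	for src, dst := range m {
		fmt.Println(src, "->", dst)
	}
}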
	I0311 21:13:28.612976   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetConfigRaw
	I0311 21:13:28.613527   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetIP
	I0311 21:13:28.616305   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.616669   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:28.616697   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.616908   47948 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/config.json ...
	I0311 21:13:28.617118   47948 start.go:128] duration metric: took 24.825115637s to createHost
	I0311 21:13:28.617141   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHHostname
	I0311 21:13:28.619246   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.619532   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:28.619559   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.619685   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHPort
	I0311 21:13:28.619866   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:28.620008   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:28.620145   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHUsername
	I0311 21:13:28.620282   47948 main.go:141] libmachine: Using SSH client type: native
	I0311 21:13:28.620474   47948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0311 21:13:28.620490   47948 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 21:13:28.726102   47948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710191608.694467831
	
	I0311 21:13:28.726129   47948 fix.go:216] guest clock: 1710191608.694467831
	I0311 21:13:28.726140   47948 fix.go:229] Guest: 2024-03-11 21:13:28.694467831 +0000 UTC Remote: 2024-03-11 21:13:28.617130448 +0000 UTC m=+24.976267977 (delta=77.337383ms)
	I0311 21:13:28.726173   47948 fix.go:200] guest clock delta is within tolerance: 77.337383ms
	I0311 21:13:28.726178   47948 start.go:83] releasing machines lock for "kubernetes-upgrade-171195", held for 24.934260884s
	I0311 21:13:28.726206   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .DriverName
	I0311 21:13:28.726500   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetIP
	I0311 21:13:28.729465   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.729794   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:28.729833   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.729992   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .DriverName
	I0311 21:13:28.730470   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .DriverName
	I0311 21:13:28.730679   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .DriverName
	I0311 21:13:28.730767   47948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:13:28.730815   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHHostname
	I0311 21:13:28.730904   47948 ssh_runner.go:195] Run: cat /version.json
	I0311 21:13:28.730926   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHHostname
	I0311 21:13:28.733287   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.733556   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.733635   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:28.733671   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.733820   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHPort
	I0311 21:13:28.733911   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:28.733934   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:28.734013   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:28.734097   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHPort
	I0311 21:13:28.734170   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHUsername
	I0311 21:13:28.734258   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:13:28.734314   47948 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195/id_rsa Username:docker}
	I0311 21:13:28.734397   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHUsername
	I0311 21:13:28.734520   47948 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195/id_rsa Username:docker}
	I0311 21:13:28.847576   47948 ssh_runner.go:195] Run: systemctl --version
	I0311 21:13:28.854860   47948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:13:29.032294   47948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:13:29.041914   47948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:13:29.041986   47948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:13:29.067168   47948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:13:29.067189   47948 start.go:494] detecting cgroup driver to use...
	I0311 21:13:29.067241   47948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:13:29.085148   47948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:13:29.099377   47948 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:13:29.099428   47948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:13:29.113350   47948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:13:29.127172   47948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:13:29.241485   47948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:13:29.404842   47948 docker.go:233] disabling docker service ...
	I0311 21:13:29.404911   47948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:13:29.428198   47948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:13:29.443054   47948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:13:29.574595   47948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:13:29.705213   47948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:13:29.724002   47948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:13:29.747656   47948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0311 21:13:29.747717   47948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:13:29.762924   47948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:13:29.762971   47948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:13:29.774367   47948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:13:29.786187   47948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:13:29.798135   47948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:13:29.810955   47948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:13:29.821710   47948 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:13:29.821754   47948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:13:29.837877   47948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:13:29.849021   47948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:13:29.974835   47948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:13:30.143502   47948 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:13:30.143573   47948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:13:30.149405   47948 start.go:562] Will wait 60s for crictl version
	I0311 21:13:30.149470   47948 ssh_runner.go:195] Run: which crictl
	I0311 21:13:30.154173   47948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:13:30.203047   47948 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:13:30.203141   47948 ssh_runner.go:195] Run: crio --version
	I0311 21:13:30.234020   47948 ssh_runner.go:195] Run: crio --version
	I0311 21:13:30.266720   47948 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0311 21:13:30.267895   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetIP
	I0311 21:13:30.271369   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:30.271904   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:13:20 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:13:30.271935   47948 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:13:30.272122   47948 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 21:13:30.277144   47948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:13:30.293144   47948 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-171195 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:13:30.293283   47948 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:13:30.293339   47948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:13:30.335526   47948 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:13:30.335604   47948 ssh_runner.go:195] Run: which lz4
	I0311 21:13:30.341324   47948 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:13:30.346325   47948 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:13:30.346355   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0311 21:13:32.394948   47948 crio.go:444] duration metric: took 2.053654534s to copy over tarball
	I0311 21:13:32.395028   47948 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:13:35.320708   47948 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.925652057s)
	I0311 21:13:35.320761   47948 crio.go:451] duration metric: took 2.925759352s to extract the tarball
	I0311 21:13:35.320771   47948 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:13:35.368917   47948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:13:35.427695   47948 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:13:35.427722   47948 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:13:35.427803   47948 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:13:35.427815   47948 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:13:35.427839   47948 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:13:35.427859   47948 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:13:35.427880   47948 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0311 21:13:35.427903   47948 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0311 21:13:35.427805   47948 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:13:35.427821   47948 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:13:35.429487   47948 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:13:35.429488   47948 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:13:35.429492   47948 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:13:35.429547   47948 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:13:35.429550   47948 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:13:35.429498   47948 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0311 21:13:35.429568   47948 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:13:35.429843   47948 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0311 21:13:35.582031   47948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:13:35.593106   47948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:13:35.597909   47948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0311 21:13:35.625546   47948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:13:35.627779   47948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:13:35.664174   47948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0311 21:13:35.678423   47948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0311 21:13:35.678467   47948 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:13:35.678502   47948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0311 21:13:35.678554   47948 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:13:35.678618   47948 ssh_runner.go:195] Run: which crictl
	I0311 21:13:35.678512   47948 ssh_runner.go:195] Run: which crictl
	I0311 21:13:35.715116   47948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:13:35.715124   47948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0311 21:13:35.741299   47948 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0311 21:13:35.741347   47948 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:13:35.741396   47948 ssh_runner.go:195] Run: which crictl
	I0311 21:13:35.809964   47948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0311 21:13:35.810010   47948 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:13:35.810058   47948 ssh_runner.go:195] Run: which crictl
	I0311 21:13:35.810347   47948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0311 21:13:35.810388   47948 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:13:35.810445   47948 ssh_runner.go:195] Run: which crictl
	I0311 21:13:35.837777   47948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:13:35.837860   47948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:13:35.838001   47948 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0311 21:13:35.838037   47948 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0311 21:13:35.838075   47948 ssh_runner.go:195] Run: which crictl
	I0311 21:13:35.979585   47948 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0311 21:13:35.979633   47948 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0311 21:13:35.979643   47948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0311 21:13:35.979680   47948 ssh_runner.go:195] Run: which crictl
	I0311 21:13:35.979705   47948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:13:35.979753   47948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:13:35.979772   47948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0311 21:13:35.979820   47948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0311 21:13:35.979832   47948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0311 21:13:35.991993   47948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0311 21:13:36.060240   47948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0311 21:13:36.107258   47948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0311 21:13:36.107312   47948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0311 21:13:36.107365   47948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0311 21:13:36.109129   47948 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0311 21:13:36.109194   47948 cache_images.go:92] duration metric: took 681.455805ms to LoadCachedImages
	W0311 21:13:36.109265   47948 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0311 21:13:36.109282   47948 kubeadm.go:928] updating node { 192.168.39.241 8443 v1.20.0 crio true true} ...
	I0311 21:13:36.109424   47948 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-171195 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:13:36.109504   47948 ssh_runner.go:195] Run: crio config
	I0311 21:13:36.166116   47948 cni.go:84] Creating CNI manager for ""
	I0311 21:13:36.166140   47948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:13:36.166155   47948 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:13:36.166178   47948 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.241 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-171195 NodeName:kubernetes-upgrade-171195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0311 21:13:36.166337   47948 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-171195"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:13:36.166420   47948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0311 21:13:36.177788   47948 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:13:36.177853   47948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:13:36.188706   47948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0311 21:13:36.208286   47948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:13:36.226688   47948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0311 21:13:36.244860   47948 ssh_runner.go:195] Run: grep 192.168.39.241	control-plane.minikube.internal$ /etc/hosts
	I0311 21:13:36.249246   47948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.241	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:13:36.268562   47948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:13:36.418135   47948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:13:36.437414   47948 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195 for IP: 192.168.39.241
	I0311 21:13:36.437437   47948 certs.go:194] generating shared ca certs ...
	I0311 21:13:36.437456   47948 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:13:36.437622   47948 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:13:36.437679   47948 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:13:36.437692   47948 certs.go:256] generating profile certs ...
	I0311 21:13:36.437764   47948 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/client.key
	I0311 21:13:36.437783   47948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/client.crt with IP's: []
	I0311 21:13:36.583924   47948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/client.crt ...
	I0311 21:13:36.583953   47948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/client.crt: {Name:mka58605ca58e6b97546458e8972a14d3b468eb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:13:36.584144   47948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/client.key ...
	I0311 21:13:36.584162   47948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/client.key: {Name:mkdd28f6566317e474f3c2a5a0116d3a3602a47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:13:36.584276   47948 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/apiserver.key.2f225e83
	I0311 21:13:36.584297   47948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/apiserver.crt.2f225e83 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.241]
	I0311 21:13:36.728773   47948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/apiserver.crt.2f225e83 ...
	I0311 21:13:36.728801   47948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/apiserver.crt.2f225e83: {Name:mk93d4f8598f20a5c50db0460fdb9eadb4816f16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:13:36.728971   47948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/apiserver.key.2f225e83 ...
	I0311 21:13:36.728989   47948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/apiserver.key.2f225e83: {Name:mk4e8b266ef93d176c3f1689548da274023c0746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:13:36.729107   47948 certs.go:381] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/apiserver.crt.2f225e83 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/apiserver.crt
	I0311 21:13:36.729287   47948 certs.go:385] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/apiserver.key.2f225e83 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/apiserver.key
	I0311 21:13:36.729438   47948 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/proxy-client.key
	I0311 21:13:36.729480   47948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/proxy-client.crt with IP's: []
	I0311 21:13:37.020018   47948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/proxy-client.crt ...
	I0311 21:13:37.020044   47948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/proxy-client.crt: {Name:mkadb64784877811232c0e9c3c4f80ce27a9d00d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:13:37.020224   47948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/proxy-client.key ...
	I0311 21:13:37.020247   47948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/proxy-client.key: {Name:mk96e1c1206395a3cf5dcbfae6ba895db6884d88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:13:37.020480   47948 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:13:37.020531   47948 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:13:37.020547   47948 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:13:37.020585   47948 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:13:37.020626   47948 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:13:37.020712   47948 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:13:37.020798   47948 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:13:37.021393   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:13:37.051839   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:13:37.079012   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:13:37.106708   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:13:37.133369   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0311 21:13:37.160312   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 21:13:37.186680   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:13:37.213777   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:13:37.244137   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:13:37.274007   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:13:37.309156   47948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:13:37.349674   47948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:13:37.370604   47948 ssh_runner.go:195] Run: openssl version
	I0311 21:13:37.377565   47948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:13:37.390577   47948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:13:37.395996   47948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:13:37.396055   47948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:13:37.402741   47948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:13:37.415513   47948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:13:37.428086   47948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:13:37.433350   47948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:13:37.433399   47948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:13:37.439794   47948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:13:37.451251   47948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:13:37.462845   47948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:13:37.467876   47948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:13:37.467915   47948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:13:37.473931   47948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:13:37.485047   47948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:13:37.489632   47948 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 21:13:37.489676   47948 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-171195 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:13:37.489740   47948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:13:37.489790   47948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:13:37.528497   47948 cri.go:89] found id: ""
	I0311 21:13:37.528583   47948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0311 21:13:37.539144   47948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:13:37.548957   47948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:13:37.558806   47948 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:13:37.558822   47948 kubeadm.go:156] found existing configuration files:
	
	I0311 21:13:37.558863   47948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:13:37.568034   47948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:13:37.568083   47948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:13:37.577941   47948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:13:37.587237   47948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:13:37.587287   47948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:13:37.597043   47948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:13:37.606499   47948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:13:37.606552   47948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:13:37.615918   47948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:13:37.625443   47948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:13:37.625488   47948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:13:37.634887   47948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:13:37.770051   47948 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:13:37.770328   47948 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:13:37.926058   47948 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:13:37.926208   47948 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:13:37.926396   47948 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:13:38.111138   47948 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:13:38.113172   47948 out.go:204]   - Generating certificates and keys ...
	I0311 21:13:38.113269   47948 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:13:38.113352   47948 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:13:38.281505   47948 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0311 21:13:38.456678   47948 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0311 21:13:38.636245   47948 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0311 21:13:38.756575   47948 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0311 21:13:39.172054   47948 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0311 21:13:39.172290   47948 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-171195 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	I0311 21:13:39.299476   47948 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0311 21:13:39.300029   47948 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-171195 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	I0311 21:13:39.523498   47948 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0311 21:13:39.863212   47948 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0311 21:13:39.921715   47948 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0311 21:13:39.922071   47948 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:13:39.998658   47948 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:13:40.096849   47948 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:13:40.228903   47948 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:13:40.535425   47948 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:13:40.555528   47948 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:13:40.557694   47948 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:13:40.557779   47948 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:13:40.701464   47948 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:13:40.703350   47948 out.go:204]   - Booting up control plane ...
	I0311 21:13:40.703484   47948 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:13:40.708020   47948 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:13:40.709170   47948 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:13:40.710005   47948 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:13:40.714827   47948 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:14:20.706638   47948 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:14:20.707111   47948 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:14:20.707372   47948 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:14:25.707929   47948 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:14:25.708191   47948 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:14:35.707151   47948 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:14:35.707403   47948 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:14:55.706791   47948 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:14:55.707202   47948 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:15:35.709191   47948 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:15:35.709468   47948 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:15:35.709485   47948 kubeadm.go:309] 
	I0311 21:15:35.709550   47948 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:15:35.709635   47948 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:15:35.709662   47948 kubeadm.go:309] 
	I0311 21:15:35.709718   47948 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:15:35.709767   47948 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:15:35.709897   47948 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:15:35.709910   47948 kubeadm.go:309] 
	I0311 21:15:35.710050   47948 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:15:35.710099   47948 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:15:35.710152   47948 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:15:35.710162   47948 kubeadm.go:309] 
	I0311 21:15:35.710325   47948 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:15:35.710439   47948 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:15:35.710450   47948 kubeadm.go:309] 
	I0311 21:15:35.710590   47948 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:15:35.710717   47948 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:15:35.710823   47948 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:15:35.710885   47948 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:15:35.710893   47948 kubeadm.go:309] 
	I0311 21:15:35.712256   47948 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:15:35.712369   47948 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:15:35.712518   47948 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0311 21:15:35.712590   47948 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-171195 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-171195 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-171195 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-171195 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0311 21:15:35.712643   47948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:15:37.810960   47948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.09828726s)
	I0311 21:15:37.811042   47948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:15:37.828701   47948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:15:37.842597   47948 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:15:37.842621   47948 kubeadm.go:156] found existing configuration files:
	
	I0311 21:15:37.842670   47948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:15:37.855434   47948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:15:37.855495   47948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:15:37.869042   47948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:15:37.881752   47948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:15:37.881808   47948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:15:37.896127   47948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:15:37.906224   47948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:15:37.906279   47948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:15:37.916303   47948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:15:37.927055   47948 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:15:37.927108   47948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:15:37.938299   47948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:15:38.019816   47948 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:15:38.019947   47948 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:15:38.202469   47948 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:15:38.202716   47948 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:15:38.202847   47948 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:15:38.435120   47948 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:15:38.437050   47948 out.go:204]   - Generating certificates and keys ...
	I0311 21:15:38.437154   47948 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:15:38.437235   47948 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:15:38.437318   47948 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:15:38.437397   47948 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:15:38.437481   47948 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:15:38.437704   47948 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:15:38.438388   47948 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:15:38.438779   47948 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:15:38.439357   47948 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:15:38.439742   47948 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:15:38.439803   47948 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:15:38.439883   47948 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:15:38.816920   47948 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:15:39.055165   47948 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:15:39.153661   47948 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:15:39.694542   47948 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:15:39.710939   47948 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:15:39.712368   47948 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:15:39.712436   47948 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:15:39.873117   47948 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:15:39.874653   47948 out.go:204]   - Booting up control plane ...
	I0311 21:15:39.874782   47948 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:15:39.894685   47948 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:15:39.896123   47948 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:15:39.897175   47948 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:15:39.901420   47948 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:16:19.904158   47948 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:16:19.904268   47948 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:16:19.904506   47948 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:16:24.904875   47948 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:16:24.905101   47948 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:16:34.905893   47948 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:16:34.906195   47948 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:16:54.905176   47948 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:16:54.905389   47948 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:17:34.905479   47948 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:17:34.905665   47948 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:17:34.905705   47948 kubeadm.go:309] 
	I0311 21:17:34.905772   47948 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:17:34.905831   47948 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:17:34.905841   47948 kubeadm.go:309] 
	I0311 21:17:34.905894   47948 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:17:34.905928   47948 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:17:34.906029   47948 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:17:34.906042   47948 kubeadm.go:309] 
	I0311 21:17:34.906190   47948 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:17:34.906247   47948 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:17:34.906299   47948 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:17:34.906309   47948 kubeadm.go:309] 
	I0311 21:17:34.906423   47948 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:17:34.906538   47948 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:17:34.906548   47948 kubeadm.go:309] 
	I0311 21:17:34.906677   47948 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:17:34.906775   47948 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:17:34.906887   47948 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:17:34.906989   47948 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:17:34.907004   47948 kubeadm.go:309] 
	I0311 21:17:34.907848   47948 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:17:34.907939   47948 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:17:34.908030   47948 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:17:34.908083   47948 kubeadm.go:393] duration metric: took 3m57.418409801s to StartCluster
	I0311 21:17:34.908125   47948 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:17:34.908195   47948 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:17:34.953460   47948 cri.go:89] found id: ""
	I0311 21:17:34.953483   47948 logs.go:276] 0 containers: []
	W0311 21:17:34.953490   47948 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:17:34.953496   47948 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:17:34.953545   47948 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:17:34.992265   47948 cri.go:89] found id: ""
	I0311 21:17:34.992287   47948 logs.go:276] 0 containers: []
	W0311 21:17:34.992294   47948 logs.go:278] No container was found matching "etcd"
	I0311 21:17:34.992300   47948 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:17:34.992344   47948 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:17:35.027573   47948 cri.go:89] found id: ""
	I0311 21:17:35.027597   47948 logs.go:276] 0 containers: []
	W0311 21:17:35.027605   47948 logs.go:278] No container was found matching "coredns"
	I0311 21:17:35.027611   47948 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:17:35.027672   47948 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:17:35.066418   47948 cri.go:89] found id: ""
	I0311 21:17:35.066440   47948 logs.go:276] 0 containers: []
	W0311 21:17:35.066447   47948 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:17:35.066453   47948 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:17:35.066518   47948 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:17:35.106231   47948 cri.go:89] found id: ""
	I0311 21:17:35.106256   47948 logs.go:276] 0 containers: []
	W0311 21:17:35.106264   47948 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:17:35.106270   47948 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:17:35.106325   47948 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:17:35.144218   47948 cri.go:89] found id: ""
	I0311 21:17:35.144247   47948 logs.go:276] 0 containers: []
	W0311 21:17:35.144258   47948 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:17:35.144266   47948 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:17:35.144325   47948 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:17:35.178192   47948 cri.go:89] found id: ""
	I0311 21:17:35.178214   47948 logs.go:276] 0 containers: []
	W0311 21:17:35.178222   47948 logs.go:278] No container was found matching "kindnet"
	I0311 21:17:35.178230   47948 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:17:35.178241   47948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:17:35.297873   47948 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:17:35.297896   47948 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:17:35.297909   47948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:17:35.402502   47948 logs.go:123] Gathering logs for container status ...
	I0311 21:17:35.402538   47948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:17:35.451486   47948 logs.go:123] Gathering logs for kubelet ...
	I0311 21:17:35.451511   47948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:17:35.506129   47948 logs.go:123] Gathering logs for dmesg ...
	I0311 21:17:35.506160   47948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0311 21:17:35.521708   47948 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0311 21:17:35.521745   47948 out.go:239] * 
	* 
	W0311 21:17:35.521794   47948 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:17:35.521813   47948 out.go:239] * 
	* 
	W0311 21:17:35.522758   47948 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 21:17:35.526753   47948 out.go:177] 
	W0311 21:17:35.528196   47948 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:17:35.528236   47948 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0311 21:17:35.528251   47948 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0311 21:17:35.530096   47948 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-171195 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
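The exit status 109 above corresponds to minikube's K8S_KUBELET_NOT_RUNNING path shown in the log. A minimal troubleshooting sketch, assuming shell access to the node (for example via minikube ssh -p kubernetes-upgrade-171195) and using only the commands the log itself suggests; the final retry flag comes from minikube's own suggestion and related issue #4172:
	# inspect the kubelet service and its recent journal entries
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# list any control-plane containers CRI-O actually started
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# if a cgroup-driver mismatch is suspected, retry the start with the suggested override
	out/minikube-linux-amd64 start -p kubernetes-upgrade-171195 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd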
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-171195
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-171195: (2.507441291s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-171195 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-171195 status --format={{.Host}}: exit status 7 (75.318641ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
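
The "(may be ok)" note reflects that minikube's status command reports component state through its exit code, so a non-zero exit immediately after 'minikube stop' is expected rather than an error; the stdout above confirms the host is simply Stopped. A quick way to read it (sketch, reusing the command from this run):

    out/minikube-linux-amd64 -p kubernetes-upgrade-171195 status --format={{.Host}}
    echo $?   # 7 in this run: the stopped profile reports nothing running, which is expected here
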
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171195 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0311 21:17:38.935658   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-171195 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.410531617s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-171195 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171195 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-171195 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (95.129832ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-171195] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-171195
	    minikube start -p kubernetes-upgrade-171195 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1711952 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-171195 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
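
The exit status 106 above is the expected refusal: minikube will not downgrade an existing cluster in place. Collapsing the first and third options from the suggestion into one runnable sequence (sketch only, using the profile name and flags from this run):

    # Option 1: throw the profile away and recreate it at the older version.
    minikube delete -p kubernetes-upgrade-171195
    minikube start -p kubernetes-upgrade-171195 --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=crio

    # Option 3: keep the upgraded cluster and restart it at its current version,
    # which is what the test does next.
    minikube start -p kubernetes-upgrade-171195 --kubernetes-version=v1.29.0-rc.2 \
      --driver=kvm2 --container-runtime=crio
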
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171195 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-171195 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (34.652313668s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-03-11 21:19:24.386261367 +0000 UTC m=+4172.557935671
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-171195 -n kubernetes-upgrade-171195
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-171195 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-171195 logs -n 25: (2.250977947s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p offline-crio-153995                | offline-crio-153995       | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
	| start   | -p cert-expiration-228186             | cert-expiration-228186    | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:16 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-890519 stop           | minikube                  | jenkins | v1.26.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
	| start   | -p stopped-upgrade-890519             | stopped-upgrade-890519    | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-169709             | running-upgrade-169709    | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:17 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-890519             | stopped-upgrade-890519    | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
	| start   | -p cert-options-406431                | cert-options-406431       | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-169709             | running-upgrade-169709    | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	| start   | -p force-systemd-env-922319           | force-systemd-env-922319  | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:18 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-406431 ssh               | cert-options-406431       | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-406431 -- sudo        | cert-options-406431       | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-406431                | cert-options-406431       | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	| start   | -p pause-717098 --memory=2048         | pause-717098              | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:18 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	| start   | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:18 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-922319           | force-systemd-env-922319  | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC | 11 Mar 24 21:18 UTC |
	| start   | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC | 11 Mar 24 21:19 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-717098                       | pause-717098              | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC | 11 Mar 24 21:19 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-228186             | cert-expiration-228186    | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC | 11 Mar 24 21:19 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC | 11 Mar 24 21:19 UTC |
	| start   | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC |                     |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 21:19:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 21:19:22.001509   55133 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:19:22.001689   55133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:19:22.001700   55133 out.go:304] Setting ErrFile to fd 2...
	I0311 21:19:22.001706   55133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:19:22.002018   55133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:19:22.002876   55133 out.go:298] Setting JSON to false
	I0311 21:19:22.004269   55133 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7311,"bootTime":1710184651,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:19:22.004354   55133 start.go:139] virtualization: kvm guest
	I0311 21:19:22.006651   55133 out.go:177] * [NoKubernetes-364658] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:19:22.008858   55133 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:19:22.008870   55133 notify.go:220] Checking for updates...
	I0311 21:19:22.010339   55133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:19:22.011649   55133 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:19:22.012902   55133 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:19:22.014177   55133 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:19:22.015559   55133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:19:22.017447   55133 config.go:182] Loaded profile config "cert-expiration-228186": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:19:22.017577   55133 config.go:182] Loaded profile config "kubernetes-upgrade-171195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:19:22.017736   55133 config.go:182] Loaded profile config "pause-717098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:19:22.017756   55133 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0311 21:19:22.017839   55133 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:19:22.062249   55133 out.go:177] * Using the kvm2 driver based on user configuration
	I0311 21:19:22.063580   55133 start.go:297] selected driver: kvm2
	I0311 21:19:22.063595   55133 start.go:901] validating driver "kvm2" against <nil>
	I0311 21:19:22.063608   55133 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:19:22.063986   55133 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0311 21:19:22.064074   55133 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:19:22.064154   55133 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:19:22.085130   55133 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:19:22.085188   55133 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 21:19:22.085853   55133 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0311 21:19:22.086044   55133 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 21:19:22.086069   55133 cni.go:84] Creating CNI manager for ""
	I0311 21:19:22.086078   55133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:19:22.086086   55133 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 21:19:22.086102   55133 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0311 21:19:22.086160   55133 start.go:340] cluster config:
	{Name:NoKubernetes-364658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-364658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:19:22.086315   55133 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:19:22.088313   55133 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-364658
	I0311 21:19:22.089808   55133 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0311 21:19:22.115524   55133 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0311 21:19:22.115692   55133 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/NoKubernetes-364658/config.json ...
	I0311 21:19:22.115721   55133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/NoKubernetes-364658/config.json: {Name:mk874a0f97a8d1095e694f4d88f72690462c8da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:19:22.115877   55133 start.go:360] acquireMachinesLock for NoKubernetes-364658: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:19:22.115921   55133 start.go:364] duration metric: took 21.348µs to acquireMachinesLock for "NoKubernetes-364658"
	I0311 21:19:22.115935   55133 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-364658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-364658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:19:22.116059   55133 start.go:125] createHost starting for "" (driver="kvm2")
	I0311 21:19:20.536934   54656 api_server.go:279] https://192.168.39.241:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:19:20.536971   54656 api_server.go:103] status: https://192.168.39.241:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:19:20.536989   54656 api_server.go:253] Checking apiserver healthz at https://192.168.39.241:8443/healthz ...
	I0311 21:19:20.592926   54656 api_server.go:279] https://192.168.39.241:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:19:20.592970   54656 api_server.go:103] status: https://192.168.39.241:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:19:21.020386   54656 api_server.go:253] Checking apiserver healthz at https://192.168.39.241:8443/healthz ...
	I0311 21:19:21.031576   54656 api_server.go:279] https://192.168.39.241:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:19:21.031611   54656 api_server.go:103] status: https://192.168.39.241:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:19:21.521148   54656 api_server.go:253] Checking apiserver healthz at https://192.168.39.241:8443/healthz ...
	I0311 21:19:21.532635   54656 api_server.go:279] https://192.168.39.241:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:19:21.532676   54656 api_server.go:103] status: https://192.168.39.241:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:19:22.020831   54656 api_server.go:253] Checking apiserver healthz at https://192.168.39.241:8443/healthz ...
	I0311 21:19:22.039239   54656 api_server.go:279] https://192.168.39.241:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:19:22.039276   54656 api_server.go:103] status: https://192.168.39.241:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:19:22.520953   54656 api_server.go:253] Checking apiserver healthz at https://192.168.39.241:8443/healthz ...
	I0311 21:19:22.526714   54656 api_server.go:279] https://192.168.39.241:8443/healthz returned 200:
	ok
	I0311 21:19:22.536566   54656 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:19:22.536594   54656 api_server.go:131] duration metric: took 5.516375306s to wait for apiserver health ...
	I0311 21:19:22.536605   54656 cni.go:84] Creating CNI manager for ""
	I0311 21:19:22.536619   54656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:19:22.538686   54656 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:19:18.199404   54538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:19:18.215950   54538 api_server.go:72] duration metric: took 1.017674186s to wait for apiserver process to appear ...
	I0311 21:19:18.215976   54538 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:19:18.215996   54538 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I0311 21:19:18.216450   54538 api_server.go:269] stopped: https://192.168.50.163:8443/healthz: Get "https://192.168.50.163:8443/healthz": dial tcp 192.168.50.163:8443: connect: connection refused
	I0311 21:19:18.717100   54538 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I0311 21:19:22.192201   54538 api_server.go:279] https://192.168.50.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:19:22.192251   54538 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:19:22.192266   54538 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I0311 21:19:22.295116   54538 api_server.go:279] https://192.168.50.163:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:19:22.295144   54538 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:19:22.295161   54538 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I0311 21:19:22.335287   54538 api_server.go:279] https://192.168.50.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:19:22.335315   54538 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:19:22.716797   54538 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I0311 21:19:22.729588   54538 api_server.go:279] https://192.168.50.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:19:22.729627   54538 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:19:22.540228   54656 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:19:22.555984   54656 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:19:22.585858   54656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:19:22.604789   54656 system_pods.go:59] 8 kube-system pods found
	I0311 21:19:22.604839   54656 system_pods.go:61] "coredns-76f75df574-h742x" [4fc64d72-cefb-455d-8082-36b07f597d3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:19:22.604851   54656 system_pods.go:61] "coredns-76f75df574-mfzpc" [c393371c-7907-4438-9010-f0c291348e0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:19:22.604869   54656 system_pods.go:61] "etcd-kubernetes-upgrade-171195" [faa1b6c0-8780-4a70-bdae-d03aef713ffb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:19:22.604878   54656 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-171195" [28789413-3e71-4e1a-ba38-af8aec4a8669] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:19:22.604893   54656 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-171195" [547075de-ad59-43da-99ec-e6cd5a77e967] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:19:22.604902   54656 system_pods.go:61] "kube-proxy-jqt67" [0559e2d7-cd0d-4342-8fd8-9957a2b64a83] Running
	I0311 21:19:22.604910   54656 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-171195" [b7a732b1-5103-4f9d-a573-19f9ef184094] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:19:22.604921   54656 system_pods.go:61] "storage-provisioner" [a8e32a95-4ea1-439a-9f19-31f905d595f3] Running
	I0311 21:19:22.604929   54656 system_pods.go:74] duration metric: took 19.049027ms to wait for pod list to return data ...
	I0311 21:19:22.604937   54656 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:19:22.609224   54656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:19:22.609254   54656 node_conditions.go:123] node cpu capacity is 2
	I0311 21:19:22.609268   54656 node_conditions.go:105] duration metric: took 4.321623ms to run NodePressure ...
	I0311 21:19:22.609303   54656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:19:22.938025   54656 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:19:22.957191   54656 ops.go:34] apiserver oom_adj: -16
	I0311 21:19:22.957212   54656 kubeadm.go:591] duration metric: took 9.200758562s to restartPrimaryControlPlane
	I0311 21:19:22.957219   54656 kubeadm.go:393] duration metric: took 9.34251622s to StartCluster
	I0311 21:19:22.957237   54656 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:19:22.957308   54656 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:19:22.958854   54656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:19:22.959072   54656 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:19:22.960652   54656 out.go:177] * Verifying Kubernetes components...
	I0311 21:19:22.959258   54656 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:19:22.959342   54656 config.go:182] Loaded profile config "kubernetes-upgrade-171195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:19:22.960770   54656 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-171195"
	I0311 21:19:22.962053   54656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:19:22.962107   54656 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-171195"
	W0311 21:19:22.962119   54656 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:19:22.960778   54656 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-171195"
	I0311 21:19:22.962152   54656 host.go:66] Checking if "kubernetes-upgrade-171195" exists ...
	I0311 21:19:22.962166   54656 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-171195"
	I0311 21:19:22.962502   54656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:19:22.962525   54656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:19:22.962564   54656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:19:22.962589   54656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:19:22.982308   54656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I0311 21:19:22.982460   54656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46255
	I0311 21:19:22.982790   54656 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:19:22.982868   54656 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:19:22.983434   54656 main.go:141] libmachine: Using API Version  1
	I0311 21:19:22.983456   54656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:19:22.983844   54656 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:19:22.984027   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetState
	I0311 21:19:22.985135   54656 main.go:141] libmachine: Using API Version  1
	I0311 21:19:22.985160   54656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:19:22.985463   54656 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:19:22.985972   54656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:19:22.986010   54656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:19:22.987419   54656 kapi.go:59] client config for kubernetes-upgrade-171195: &rest.Config{Host:"https://192.168.39.241:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/client.crt", KeyFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kubernetes-upgrade-171195/client.key", CAFile:"/home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil
), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55640), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 21:19:22.987686   54656 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-171195"
	W0311 21:19:22.987698   54656 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:19:22.987723   54656 host.go:66] Checking if "kubernetes-upgrade-171195" exists ...
	I0311 21:19:22.988007   54656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:19:22.988028   54656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:19:23.005698   54656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40231
	I0311 21:19:23.005895   54656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43869
	I0311 21:19:23.006329   54656 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:19:23.006542   54656 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:19:23.007019   54656 main.go:141] libmachine: Using API Version  1
	I0311 21:19:23.007034   54656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:19:23.007163   54656 main.go:141] libmachine: Using API Version  1
	I0311 21:19:23.007185   54656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:19:23.007596   54656 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:19:23.007927   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetState
	I0311 21:19:23.007974   54656 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:19:23.008533   54656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:19:23.008567   54656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:19:23.010981   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .DriverName
	I0311 21:19:23.013898   54656 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:19:23.216242   54538 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I0311 21:19:23.222608   54538 api_server.go:279] https://192.168.50.163:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:19:23.222641   54538 api_server.go:103] status: https://192.168.50.163:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:19:23.716121   54538 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I0311 21:19:23.723369   54538 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I0311 21:19:23.734577   54538 api_server.go:141] control plane version: v1.28.4
	I0311 21:19:23.734611   54538 api_server.go:131] duration metric: took 5.518627118s to wait for apiserver health ...
	I0311 21:19:23.734622   54538 cni.go:84] Creating CNI manager for ""
	I0311 21:19:23.734631   54538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:19:23.737066   54538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:19:23.015774   54656 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:19:23.015790   54656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:19:23.015808   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHHostname
	I0311 21:19:23.023712   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:19:23.025546   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:18:19 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:19:23.025567   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:19:23.025985   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHPort
	I0311 21:19:23.026605   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:19:23.026726   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHUsername
	I0311 21:19:23.026821   54656 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195/id_rsa Username:docker}
	I0311 21:19:23.029087   54656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37967
	I0311 21:19:23.029427   54656 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:19:23.029836   54656 main.go:141] libmachine: Using API Version  1
	I0311 21:19:23.029851   54656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:19:23.030160   54656 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:19:23.030310   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetState
	I0311 21:19:23.031893   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .DriverName
	I0311 21:19:23.032137   54656 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:19:23.032152   54656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:19:23.032167   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHHostname
	I0311 21:19:23.035472   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:19:23.036136   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:90:45", ip: ""} in network mk-kubernetes-upgrade-171195: {Iface:virbr1 ExpiryTime:2024-03-11 22:18:19 +0000 UTC Type:0 Mac:52:54:00:08:90:45 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:kubernetes-upgrade-171195 Clientid:01:52:54:00:08:90:45}
	I0311 21:19:23.036167   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | domain kubernetes-upgrade-171195 has defined IP address 192.168.39.241 and MAC address 52:54:00:08:90:45 in network mk-kubernetes-upgrade-171195
	I0311 21:19:23.036337   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHPort
	I0311 21:19:23.036525   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHKeyPath
	I0311 21:19:23.036696   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .GetSSHUsername
	I0311 21:19:23.036987   54656 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/kubernetes-upgrade-171195/id_rsa Username:docker}
	I0311 21:19:23.267806   54656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:19:23.294181   54656 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:19:23.294259   54656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:19:23.316089   54656 api_server.go:72] duration metric: took 356.98559ms to wait for apiserver process to appear ...
	I0311 21:19:23.316113   54656 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:19:23.316133   54656 api_server.go:253] Checking apiserver healthz at https://192.168.39.241:8443/healthz ...
	I0311 21:19:23.323334   54656 api_server.go:279] https://192.168.39.241:8443/healthz returned 200:
	ok
	I0311 21:19:23.326324   54656 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:19:23.326347   54656 api_server.go:131] duration metric: took 10.226826ms to wait for apiserver health ...
	I0311 21:19:23.326357   54656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:19:23.335162   54656 system_pods.go:59] 8 kube-system pods found
	I0311 21:19:23.335220   54656 system_pods.go:61] "coredns-76f75df574-h742x" [4fc64d72-cefb-455d-8082-36b07f597d3c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:19:23.335234   54656 system_pods.go:61] "coredns-76f75df574-mfzpc" [c393371c-7907-4438-9010-f0c291348e0a] Running
	I0311 21:19:23.335251   54656 system_pods.go:61] "etcd-kubernetes-upgrade-171195" [faa1b6c0-8780-4a70-bdae-d03aef713ffb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:19:23.335267   54656 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-171195" [28789413-3e71-4e1a-ba38-af8aec4a8669] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:19:23.335297   54656 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-171195" [547075de-ad59-43da-99ec-e6cd5a77e967] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:19:23.335311   54656 system_pods.go:61] "kube-proxy-jqt67" [0559e2d7-cd0d-4342-8fd8-9957a2b64a83] Running
	I0311 21:19:23.335322   54656 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-171195" [b7a732b1-5103-4f9d-a573-19f9ef184094] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:19:23.335332   54656 system_pods.go:61] "storage-provisioner" [a8e32a95-4ea1-439a-9f19-31f905d595f3] Running
	I0311 21:19:23.335343   54656 system_pods.go:74] duration metric: took 8.980807ms to wait for pod list to return data ...
	I0311 21:19:23.335368   54656 kubeadm.go:576] duration metric: took 376.259221ms to wait for: map[apiserver:true system_pods:true]
	I0311 21:19:23.335388   54656 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:19:23.338812   54656 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:19:23.338833   54656 node_conditions.go:123] node cpu capacity is 2
	I0311 21:19:23.338849   54656 node_conditions.go:105] duration metric: took 3.444242ms to run NodePressure ...
	I0311 21:19:23.338862   54656 start.go:240] waiting for startup goroutines ...
	I0311 21:19:23.362534   54656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:19:23.391269   54656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:19:24.281558   54656 main.go:141] libmachine: Making call to close driver server
	I0311 21:19:24.281586   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .Close
	I0311 21:19:24.282043   54656 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:19:24.282083   54656 main.go:141] libmachine: Making call to close driver server
	I0311 21:19:24.282122   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Closing plugin on server side
	I0311 21:19:24.282201   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .Close
	I0311 21:19:24.282114   54656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:19:24.282252   54656 main.go:141] libmachine: Making call to close driver server
	I0311 21:19:24.282259   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .Close
	I0311 21:19:24.282645   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Closing plugin on server side
	I0311 21:19:24.282678   54656 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:19:24.282685   54656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:19:24.282684   54656 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:19:24.282696   54656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:19:24.282698   54656 main.go:141] libmachine: Making call to close driver server
	I0311 21:19:24.282706   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .Close
	I0311 21:19:24.282888   54656 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:19:24.282899   54656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:19:24.282993   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Closing plugin on server side
	I0311 21:19:24.294546   54656 main.go:141] libmachine: Making call to close driver server
	I0311 21:19:24.294567   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) Calling .Close
	I0311 21:19:24.294810   54656 main.go:141] libmachine: (kubernetes-upgrade-171195) DBG | Closing plugin on server side
	I0311 21:19:24.294840   54656 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:19:24.294849   54656 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:19:24.296935   54656 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0311 21:19:24.298164   54656 addons.go:505] duration metric: took 1.338906844s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0311 21:19:24.298205   54656 start.go:245] waiting for cluster config update ...
	I0311 21:19:24.298225   54656 start.go:254] writing updated cluster config ...
	I0311 21:19:24.298471   54656 ssh_runner.go:195] Run: rm -f paused
	I0311 21:19:24.365154   54656 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0311 21:19:24.367090   54656 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-171195" cluster and "default" namespace by default
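
	The log above repeatedly polls the apiserver's /healthz endpoint (api_server.go:253/279), tolerating the transient 500 from the rbac/bootstrap-roles post-start hook until a 200 "ok" comes back. The following is only a minimal stand-alone sketch of that polling pattern, not minikube's actual implementation; the endpoint URL and the InsecureSkipVerify transport are illustrative assumptions (a real client would use the CA and client certificates from the kubeconfig shown earlier in the log).

	// healthzpoll.go - hedged sketch of polling a kube-apiserver /healthz endpoint
	// until it reports healthy, mirroring the "Checking apiserver healthz ..." lines above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Hypothetical endpoint; substitute the control-plane IP printed in the log.
		const healthz = "https://192.168.50.163:8443/healthz"

		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for this sketch only: skip server certificate verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		// Poll every 500ms until the apiserver returns 200 "ok".
		for {
			resp, err := client.Get(healthz)
			if err != nil {
				fmt.Println("healthz not reachable yet:", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// A 500 with "[-]poststarthook/rbac/bootstrap-roles failed" is expected
			// briefly during startup; keep waiting.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			time.Sleep(500 * time.Millisecond)
		}
	}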
	
	
	==> CRI-O <==
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.247355304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710191965247314116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74977f4f-40c8-4c44-b21a-365af3eb50ba name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.248273876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47cb3ead-d1c6-436a-a199-604de1f36022 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.248387616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47cb3ead-d1c6-436a-a199-604de1f36022 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.249333076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a777efc587ccaf111497c2901fb17a5876739d403589d9cc95305485288a221,PodSandboxId:e014f43ac4474323f692e4f329f3843437a6d299daf89fd3e6af85f9a0e68a37,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710191961148076415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mfzpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c393371c-7907-4438-9010-f0c291348e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 41d64d1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e9a76cb379e8c8bdedbe4dd35fd1fc3ed9c983966b456f58a71268704cc242,PodSandboxId:111f33704d8e3f5af0cab09eb0b5d205fb21032b219f08e23af5a4e38640bf09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710191961116781174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqt67,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0559e2d7-cd0d-4342-8fd8-9957a2b64a83,},Annotations:map[string]string{io.kubernetes.container.hash: 722c436d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8267a95756428f55a07338891777778c90f82fbbaf7b0ef068bfd43aa40a14df,PodSandboxId:8caa44ffd5915897ecc1692e127926b3586a2b0bf02137550d737892fc57cd1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710191961128359399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a8e32a95-4ea1-439a-9f19-31f905d595f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8547da88,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8eb7c9e52a080e4b6a9dae213336ea852eb2ecc7bb5a6bd96ee8706b6594656,PodSandboxId:03298645672a8a04debf1d67687434de1de68714737c5bc6debb53635257178f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710191961092584419,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h742x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc64d72-cefb-455d-8082-36
b07f597d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2e597afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc161fe6949f9d7fb3cb2c86d8424643790be4b8c8837097a9fd9ba558ba15a0,PodSandboxId:6f1ac93173dbd79a3d0e79a774eefb4774677789b0d6a2d9d12b3323a4205361,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710191956527903054,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9014c8b72bef43e177791eb61d2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b51f66b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f88548d817de01f0c08dc28f3d5360e0bdb307d461722d137797666242a8ed,PodSandboxId:ef1551cf60ab0a6164fd0c741c1d7307bf57d262cf9efa9a741778f658056a14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710191956505998956,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b21e71a2cdba05c5485a45d9d9df13,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef384fd237e584f790e9b06cc45cb6c66600759956170aaa03ab61e5f3fd783a,PodSandboxId:3e5d9eb03bcebe9d1e1d66929ad5f36110aef25edf745069427b139dc8505592,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710191956510698667,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22c85281c6f59bebb56ecc1d61106730,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4dcffd4f05d2af2dbe377282439d14eca6968ff058e0b557584097c56d11c88,PodSandboxId:84f973989f0457ccc397c0b6fc050140da958932f022c61ebc46b1547aafd204,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710191956477101822,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100de7ee63318c806a60db01f0e53dae,},Annotations:map[string]string{io.kubernetes.container.hash: b08e25d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d182a0315037aa7626d69a740d9576011f0f36ae9d2c42ea12e1617f05c20396,PodSandboxId:f92438924fc8208afe7173e531fbed44fba2dc07943f5200e0bee5e6f2988420,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710191949662360552,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqt67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0559e2d7-cd0d-4342-8fd8-9957a2b64a83,},Annotations:map[string]string{io.kubernetes.container.hash: 722c436d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278ddc87f9a076fbb1679cdc3337faa9853955ccc6836e66b065c6bb1365dc82,PodSandboxId:aad7f4dfc524b47193cab61915ca499ec52b2728ee0d2e77992c2f63c5100ddb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710191950770195166,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-76f75df574-h742x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc64d72-cefb-455d-8082-36b07f597d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2e597afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454c026959fea19f32b810d913682433120e2dfd7b19bff1988a7eacb04a09c7,PodSandboxId:f0a9e40d26082f3dd6a86ba83dd78174da1ea8ac2736b57cd2528d42de6d04e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710191950087935944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8e32a95-4ea1-439a-9f19-31f905d595f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8547da88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5038326f1eb8b617cd53ad66d2a7732dae012180d8bf81f1cdc00cd2e8a292a9,PodSandboxId:70192ad4f44f7468632293a8e428b3a8290a59a0a871042d7b33f6241caa8d5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed
15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710191949895409972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9014c8b72bef43e177791eb61d2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b51f66b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d6e9ff2a3f8b4b36b5dcebe15966a2180b64425f0ec4d626371d3f9e666b38c,PodSandboxId:692aba41a8ec790a842c9b17ee0d4806dd1d808c75e7a42b90c6239a331dff61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308
c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710191949889893079,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100de7ee63318c806a60db01f0e53dae,},Annotations:map[string]string{io.kubernetes.container.hash: b08e25d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ba2e79a7302a27c6ce4efe8d60cd8659eebf08c886ff0a860598e7a61e585a,PodSandboxId:e83de443b8bf0876492de197e14c0d768064170d1b7d266a6993e9a7037536a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6
032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710191949761339064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b21e71a2cdba05c5485a45d9d9df13,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77fa53e4a50db7a01c96ea987de67a0ad0cecb76674fb4ecfb197b7c08c1260a,PodSandboxId:3ae929e26ec1ce7d742d518b349f09e7e5f26d3ad09724d07a5bb38e019d3348,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f4
44e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710191949392308322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22c85281c6f59bebb56ecc1d61106730,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175d4073dbd7b5d37bae2d67ee7f4b633c09d87b337a8fcdc9acdf7c4686b9d0,PodSandboxId:a19ff95ed88068d8650b85434592a95010020be680b8844c66e16893661fa036,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710191936535302496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mfzpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c393371c-7907-4438-9010-f0c291348e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 41d64d1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47cb3ead-d1c6-436a-a199-604de1f36022 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.316121383Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f976100-89b2-453c-b1b7-8dff87889161 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.316256452Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f976100-89b2-453c-b1b7-8dff87889161 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.317909413Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=394a1f6a-ce9f-402f-9b82-d7b4698dded6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.318434271Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710191965318404826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=394a1f6a-ce9f-402f-9b82-d7b4698dded6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.319434731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d210d82-ce65-460d-bd52-e1dc2cb023fb name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.319748462Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d210d82-ce65-460d-bd52-e1dc2cb023fb name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.324979976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a777efc587ccaf111497c2901fb17a5876739d403589d9cc95305485288a221,PodSandboxId:e014f43ac4474323f692e4f329f3843437a6d299daf89fd3e6af85f9a0e68a37,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710191961148076415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mfzpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c393371c-7907-4438-9010-f0c291348e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 41d64d1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e9a76cb379e8c8bdedbe4dd35fd1fc3ed9c983966b456f58a71268704cc242,PodSandboxId:111f33704d8e3f5af0cab09eb0b5d205fb21032b219f08e23af5a4e38640bf09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710191961116781174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqt67,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0559e2d7-cd0d-4342-8fd8-9957a2b64a83,},Annotations:map[string]string{io.kubernetes.container.hash: 722c436d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8267a95756428f55a07338891777778c90f82fbbaf7b0ef068bfd43aa40a14df,PodSandboxId:8caa44ffd5915897ecc1692e127926b3586a2b0bf02137550d737892fc57cd1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710191961128359399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a8e32a95-4ea1-439a-9f19-31f905d595f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8547da88,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8eb7c9e52a080e4b6a9dae213336ea852eb2ecc7bb5a6bd96ee8706b6594656,PodSandboxId:03298645672a8a04debf1d67687434de1de68714737c5bc6debb53635257178f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710191961092584419,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h742x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc64d72-cefb-455d-8082-36
b07f597d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2e597afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc161fe6949f9d7fb3cb2c86d8424643790be4b8c8837097a9fd9ba558ba15a0,PodSandboxId:6f1ac93173dbd79a3d0e79a774eefb4774677789b0d6a2d9d12b3323a4205361,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710191956527903054,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9014c8b72bef43e177791eb61d2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b51f66b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f88548d817de01f0c08dc28f3d5360e0bdb307d461722d137797666242a8ed,PodSandboxId:ef1551cf60ab0a6164fd0c741c1d7307bf57d262cf9efa9a741778f658056a14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710191956505998956,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b21e71a2cdba05c5485a45d9d9df13,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef384fd237e584f790e9b06cc45cb6c66600759956170aaa03ab61e5f3fd783a,PodSandboxId:3e5d9eb03bcebe9d1e1d66929ad5f36110aef25edf745069427b139dc8505592,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710191956510698667,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22c85281c6f59bebb56ecc1d61106730,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4dcffd4f05d2af2dbe377282439d14eca6968ff058e0b557584097c56d11c88,PodSandboxId:84f973989f0457ccc397c0b6fc050140da958932f022c61ebc46b1547aafd204,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710191956477101822,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100de7ee63318c806a60db01f0e53dae,},Annotations:map[string]string{io.kubernetes.container.hash: b08e25d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d182a0315037aa7626d69a740d9576011f0f36ae9d2c42ea12e1617f05c20396,PodSandboxId:f92438924fc8208afe7173e531fbed44fba2dc07943f5200e0bee5e6f2988420,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710191949662360552,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqt67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0559e2d7-cd0d-4342-8fd8-9957a2b64a83,},Annotations:map[string]string{io.kubernetes.container.hash: 722c436d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278ddc87f9a076fbb1679cdc3337faa9853955ccc6836e66b065c6bb1365dc82,PodSandboxId:aad7f4dfc524b47193cab61915ca499ec52b2728ee0d2e77992c2f63c5100ddb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710191950770195166,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-76f75df574-h742x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc64d72-cefb-455d-8082-36b07f597d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2e597afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454c026959fea19f32b810d913682433120e2dfd7b19bff1988a7eacb04a09c7,PodSandboxId:f0a9e40d26082f3dd6a86ba83dd78174da1ea8ac2736b57cd2528d42de6d04e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710191950087935944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8e32a95-4ea1-439a-9f19-31f905d595f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8547da88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5038326f1eb8b617cd53ad66d2a7732dae012180d8bf81f1cdc00cd2e8a292a9,PodSandboxId:70192ad4f44f7468632293a8e428b3a8290a59a0a871042d7b33f6241caa8d5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed
15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710191949895409972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9014c8b72bef43e177791eb61d2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b51f66b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d6e9ff2a3f8b4b36b5dcebe15966a2180b64425f0ec4d626371d3f9e666b38c,PodSandboxId:692aba41a8ec790a842c9b17ee0d4806dd1d808c75e7a42b90c6239a331dff61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308
c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710191949889893079,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100de7ee63318c806a60db01f0e53dae,},Annotations:map[string]string{io.kubernetes.container.hash: b08e25d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ba2e79a7302a27c6ce4efe8d60cd8659eebf08c886ff0a860598e7a61e585a,PodSandboxId:e83de443b8bf0876492de197e14c0d768064170d1b7d266a6993e9a7037536a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6
032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710191949761339064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b21e71a2cdba05c5485a45d9d9df13,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77fa53e4a50db7a01c96ea987de67a0ad0cecb76674fb4ecfb197b7c08c1260a,PodSandboxId:3ae929e26ec1ce7d742d518b349f09e7e5f26d3ad09724d07a5bb38e019d3348,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f4
44e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710191949392308322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22c85281c6f59bebb56ecc1d61106730,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175d4073dbd7b5d37bae2d67ee7f4b633c09d87b337a8fcdc9acdf7c4686b9d0,PodSandboxId:a19ff95ed88068d8650b85434592a95010020be680b8844c66e16893661fa036,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710191936535302496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mfzpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c393371c-7907-4438-9010-f0c291348e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 41d64d1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d210d82-ce65-460d-bd52-e1dc2cb023fb name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.388395763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3534b57-f4c6-4ecd-bbc9-a8c6c62b823a name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.388620463Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3534b57-f4c6-4ecd-bbc9-a8c6c62b823a name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.390802373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d27d9079-2c70-4b3f-a12b-67264c68ee98 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.391379970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710191965391346806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d27d9079-2c70-4b3f-a12b-67264c68ee98 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.392078413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d522b5e5-e25e-43b4-87eb-7fe76a9f2e69 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.392135955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d522b5e5-e25e-43b4-87eb-7fe76a9f2e69 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.392454420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a777efc587ccaf111497c2901fb17a5876739d403589d9cc95305485288a221,PodSandboxId:e014f43ac4474323f692e4f329f3843437a6d299daf89fd3e6af85f9a0e68a37,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710191961148076415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mfzpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c393371c-7907-4438-9010-f0c291348e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 41d64d1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e9a76cb379e8c8bdedbe4dd35fd1fc3ed9c983966b456f58a71268704cc242,PodSandboxId:111f33704d8e3f5af0cab09eb0b5d205fb21032b219f08e23af5a4e38640bf09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710191961116781174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqt67,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0559e2d7-cd0d-4342-8fd8-9957a2b64a83,},Annotations:map[string]string{io.kubernetes.container.hash: 722c436d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8267a95756428f55a07338891777778c90f82fbbaf7b0ef068bfd43aa40a14df,PodSandboxId:8caa44ffd5915897ecc1692e127926b3586a2b0bf02137550d737892fc57cd1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710191961128359399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a8e32a95-4ea1-439a-9f19-31f905d595f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8547da88,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8eb7c9e52a080e4b6a9dae213336ea852eb2ecc7bb5a6bd96ee8706b6594656,PodSandboxId:03298645672a8a04debf1d67687434de1de68714737c5bc6debb53635257178f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710191961092584419,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h742x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc64d72-cefb-455d-8082-36
b07f597d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2e597afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc161fe6949f9d7fb3cb2c86d8424643790be4b8c8837097a9fd9ba558ba15a0,PodSandboxId:6f1ac93173dbd79a3d0e79a774eefb4774677789b0d6a2d9d12b3323a4205361,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710191956527903054,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9014c8b72bef43e177791eb61d2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b51f66b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f88548d817de01f0c08dc28f3d5360e0bdb307d461722d137797666242a8ed,PodSandboxId:ef1551cf60ab0a6164fd0c741c1d7307bf57d262cf9efa9a741778f658056a14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710191956505998956,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b21e71a2cdba05c5485a45d9d9df13,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef384fd237e584f790e9b06cc45cb6c66600759956170aaa03ab61e5f3fd783a,PodSandboxId:3e5d9eb03bcebe9d1e1d66929ad5f36110aef25edf745069427b139dc8505592,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710191956510698667,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22c85281c6f59bebb56ecc1d61106730,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4dcffd4f05d2af2dbe377282439d14eca6968ff058e0b557584097c56d11c88,PodSandboxId:84f973989f0457ccc397c0b6fc050140da958932f022c61ebc46b1547aafd204,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710191956477101822,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100de7ee63318c806a60db01f0e53dae,},Annotations:map[string]string{io.kubernetes.container.hash: b08e25d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d182a0315037aa7626d69a740d9576011f0f36ae9d2c42ea12e1617f05c20396,PodSandboxId:f92438924fc8208afe7173e531fbed44fba2dc07943f5200e0bee5e6f2988420,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710191949662360552,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqt67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0559e2d7-cd0d-4342-8fd8-9957a2b64a83,},Annotations:map[string]string{io.kubernetes.container.hash: 722c436d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278ddc87f9a076fbb1679cdc3337faa9853955ccc6836e66b065c6bb1365dc82,PodSandboxId:aad7f4dfc524b47193cab61915ca499ec52b2728ee0d2e77992c2f63c5100ddb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710191950770195166,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-76f75df574-h742x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc64d72-cefb-455d-8082-36b07f597d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2e597afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454c026959fea19f32b810d913682433120e2dfd7b19bff1988a7eacb04a09c7,PodSandboxId:f0a9e40d26082f3dd6a86ba83dd78174da1ea8ac2736b57cd2528d42de6d04e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710191950087935944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8e32a95-4ea1-439a-9f19-31f905d595f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8547da88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5038326f1eb8b617cd53ad66d2a7732dae012180d8bf81f1cdc00cd2e8a292a9,PodSandboxId:70192ad4f44f7468632293a8e428b3a8290a59a0a871042d7b33f6241caa8d5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed
15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710191949895409972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9014c8b72bef43e177791eb61d2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b51f66b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d6e9ff2a3f8b4b36b5dcebe15966a2180b64425f0ec4d626371d3f9e666b38c,PodSandboxId:692aba41a8ec790a842c9b17ee0d4806dd1d808c75e7a42b90c6239a331dff61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308
c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710191949889893079,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100de7ee63318c806a60db01f0e53dae,},Annotations:map[string]string{io.kubernetes.container.hash: b08e25d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ba2e79a7302a27c6ce4efe8d60cd8659eebf08c886ff0a860598e7a61e585a,PodSandboxId:e83de443b8bf0876492de197e14c0d768064170d1b7d266a6993e9a7037536a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6
032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710191949761339064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b21e71a2cdba05c5485a45d9d9df13,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77fa53e4a50db7a01c96ea987de67a0ad0cecb76674fb4ecfb197b7c08c1260a,PodSandboxId:3ae929e26ec1ce7d742d518b349f09e7e5f26d3ad09724d07a5bb38e019d3348,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f4
44e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710191949392308322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22c85281c6f59bebb56ecc1d61106730,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175d4073dbd7b5d37bae2d67ee7f4b633c09d87b337a8fcdc9acdf7c4686b9d0,PodSandboxId:a19ff95ed88068d8650b85434592a95010020be680b8844c66e16893661fa036,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710191936535302496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mfzpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c393371c-7907-4438-9010-f0c291348e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 41d64d1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d522b5e5-e25e-43b4-87eb-7fe76a9f2e69 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.442614923Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df8d7557-9c32-4764-ad35-fbbddcc2e7bb name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.442746800Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df8d7557-9c32-4764-ad35-fbbddcc2e7bb name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.444723952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6421ab8f-9323-49fd-ab87-e32bb5a9dcbf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.445221755Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710191965445191417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6421ab8f-9323-49fd-ab87-e32bb5a9dcbf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.446678238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ddeb8555-dde3-4ab2-914c-b650a150447e name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.446753145Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ddeb8555-dde3-4ab2-914c-b650a150447e name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:25 kubernetes-upgrade-171195 crio[2764]: time="2024-03-11 21:19:25.447270910Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a777efc587ccaf111497c2901fb17a5876739d403589d9cc95305485288a221,PodSandboxId:e014f43ac4474323f692e4f329f3843437a6d299daf89fd3e6af85f9a0e68a37,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710191961148076415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mfzpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c393371c-7907-4438-9010-f0c291348e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 41d64d1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e9a76cb379e8c8bdedbe4dd35fd1fc3ed9c983966b456f58a71268704cc242,PodSandboxId:111f33704d8e3f5af0cab09eb0b5d205fb21032b219f08e23af5a4e38640bf09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710191961116781174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqt67,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0559e2d7-cd0d-4342-8fd8-9957a2b64a83,},Annotations:map[string]string{io.kubernetes.container.hash: 722c436d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8267a95756428f55a07338891777778c90f82fbbaf7b0ef068bfd43aa40a14df,PodSandboxId:8caa44ffd5915897ecc1692e127926b3586a2b0bf02137550d737892fc57cd1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710191961128359399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a8e32a95-4ea1-439a-9f19-31f905d595f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8547da88,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8eb7c9e52a080e4b6a9dae213336ea852eb2ecc7bb5a6bd96ee8706b6594656,PodSandboxId:03298645672a8a04debf1d67687434de1de68714737c5bc6debb53635257178f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710191961092584419,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h742x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc64d72-cefb-455d-8082-36
b07f597d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2e597afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc161fe6949f9d7fb3cb2c86d8424643790be4b8c8837097a9fd9ba558ba15a0,PodSandboxId:6f1ac93173dbd79a3d0e79a774eefb4774677789b0d6a2d9d12b3323a4205361,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710191956527903054,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9014c8b72bef43e177791eb61d2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b51f66b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98f88548d817de01f0c08dc28f3d5360e0bdb307d461722d137797666242a8ed,PodSandboxId:ef1551cf60ab0a6164fd0c741c1d7307bf57d262cf9efa9a741778f658056a14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710191956505998956,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b21e71a2cdba05c5485a45d9d9df13,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef384fd237e584f790e9b06cc45cb6c66600759956170aaa03ab61e5f3fd783a,PodSandboxId:3e5d9eb03bcebe9d1e1d66929ad5f36110aef25edf745069427b139dc8505592,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710191956510698667,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22c85281c6f59bebb56ecc1d61106730,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4dcffd4f05d2af2dbe377282439d14eca6968ff058e0b557584097c56d11c88,PodSandboxId:84f973989f0457ccc397c0b6fc050140da958932f022c61ebc46b1547aafd204,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710191956477101822,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100de7ee63318c806a60db01f0e53dae,},Annotations:map[string]string{io.kubernetes.container.hash: b08e25d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d182a0315037aa7626d69a740d9576011f0f36ae9d2c42ea12e1617f05c20396,PodSandboxId:f92438924fc8208afe7173e531fbed44fba2dc07943f5200e0bee5e6f2988420,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710191949662360552,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jqt67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0559e2d7-cd0d-4342-8fd8-9957a2b64a83,},Annotations:map[string]string{io.kubernetes.container.hash: 722c436d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:278ddc87f9a076fbb1679cdc3337faa9853955ccc6836e66b065c6bb1365dc82,PodSandboxId:aad7f4dfc524b47193cab61915ca499ec52b2728ee0d2e77992c2f63c5100ddb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710191950770195166,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-76f75df574-h742x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc64d72-cefb-455d-8082-36b07f597d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 2e597afa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454c026959fea19f32b810d913682433120e2dfd7b19bff1988a7eacb04a09c7,PodSandboxId:f0a9e40d26082f3dd6a86ba83dd78174da1ea8ac2736b57cd2528d42de6d04e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710191950087935944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8e32a95-4ea1-439a-9f19-31f905d595f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8547da88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5038326f1eb8b617cd53ad66d2a7732dae012180d8bf81f1cdc00cd2e8a292a9,PodSandboxId:70192ad4f44f7468632293a8e428b3a8290a59a0a871042d7b33f6241caa8d5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed
15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710191949895409972,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9014c8b72bef43e177791eb61d2f7b,},Annotations:map[string]string{io.kubernetes.container.hash: 7b51f66b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d6e9ff2a3f8b4b36b5dcebe15966a2180b64425f0ec4d626371d3f9e666b38c,PodSandboxId:692aba41a8ec790a842c9b17ee0d4806dd1d808c75e7a42b90c6239a331dff61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308
c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710191949889893079,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100de7ee63318c806a60db01f0e53dae,},Annotations:map[string]string{io.kubernetes.container.hash: b08e25d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ba2e79a7302a27c6ce4efe8d60cd8659eebf08c886ff0a860598e7a61e585a,PodSandboxId:e83de443b8bf0876492de197e14c0d768064170d1b7d266a6993e9a7037536a5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6
032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710191949761339064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b21e71a2cdba05c5485a45d9d9df13,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77fa53e4a50db7a01c96ea987de67a0ad0cecb76674fb4ecfb197b7c08c1260a,PodSandboxId:3ae929e26ec1ce7d742d518b349f09e7e5f26d3ad09724d07a5bb38e019d3348,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f4
44e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710191949392308322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22c85281c6f59bebb56ecc1d61106730,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175d4073dbd7b5d37bae2d67ee7f4b633c09d87b337a8fcdc9acdf7c4686b9d0,PodSandboxId:a19ff95ed88068d8650b85434592a95010020be680b8844c66e16893661fa036,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710191936535302496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mfzpc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c393371c-7907-4438-9010-f0c291348e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 41d64d1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ddeb8555-dde3-4ab2-914c-b650a150447e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2a777efc587cc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago       Running             coredns                   1                   e014f43ac4474       coredns-76f75df574-mfzpc
	8267a95756428       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       2                   8caa44ffd5915       storage-provisioner
	98e9a76cb379e       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   4 seconds ago       Running             kube-proxy                2                   111f33704d8e3       kube-proxy-jqt67
	b8eb7c9e52a08       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago       Running             coredns                   2                   03298645672a8       coredns-76f75df574-h742x
	cc161fe6949f9       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 seconds ago       Running             etcd                      2                   6f1ac93173dbd       etcd-kubernetes-upgrade-171195
	ef384fd237e58       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   9 seconds ago       Running             kube-controller-manager   2                   3e5d9eb03bceb       kube-controller-manager-kubernetes-upgrade-171195
	98f88548d817d       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   9 seconds ago       Running             kube-scheduler            2                   ef1551cf60ab0       kube-scheduler-kubernetes-upgrade-171195
	e4dcffd4f05d2       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   9 seconds ago       Running             kube-apiserver            2                   84f973989f045       kube-apiserver-kubernetes-upgrade-171195
	278ddc87f9a07       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 seconds ago      Exited              coredns                   1                   aad7f4dfc524b       coredns-76f75df574-h742x
	454c026959fea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Exited              storage-provisioner       1                   f0a9e40d26082       storage-provisioner
	5038326f1eb8b       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   15 seconds ago      Exited              etcd                      1                   70192ad4f44f7       etcd-kubernetes-upgrade-171195
	6d6e9ff2a3f8b       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   15 seconds ago      Exited              kube-apiserver            1                   692aba41a8ec7       kube-apiserver-kubernetes-upgrade-171195
	27ba2e79a7302       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   15 seconds ago      Exited              kube-scheduler            1                   e83de443b8bf0       kube-scheduler-kubernetes-upgrade-171195
	d182a0315037a       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   15 seconds ago      Exited              kube-proxy                1                   f92438924fc82       kube-proxy-jqt67
	77fa53e4a50db       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   16 seconds ago      Exited              kube-controller-manager   1                   3ae929e26ec1c       kube-controller-manager-kubernetes-upgrade-171195
	175d4073dbd7b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   29 seconds ago      Exited              coredns                   0                   a19ff95ed8806       coredns-76f75df574-mfzpc
	
	
	==> coredns [175d4073dbd7b5d37bae2d67ee7f4b633c09d87b337a8fcdc9acdf7c4686b9d0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [278ddc87f9a076fbb1679cdc3337faa9853955ccc6836e66b065c6bb1365dc82] <==
	
	
	==> coredns [2a777efc587ccaf111497c2901fb17a5876739d403589d9cc95305485288a221] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b8eb7c9e52a080e4b6a9dae213336ea852eb2ecc7bb5a6bd96ee8706b6594656] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-171195
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-171195
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 21:18:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-171195
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:19:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 21:19:20 +0000   Mon, 11 Mar 2024 21:18:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 21:19:20 +0000   Mon, 11 Mar 2024 21:18:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 21:19:20 +0000   Mon, 11 Mar 2024 21:18:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 21:19:20 +0000   Mon, 11 Mar 2024 21:18:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    kubernetes-upgrade-171195
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 004f54e7f7a24666a85f2e8bd539c0e9
	  System UUID:                004f54e7-f7a2-4666-a85f-2e8bd539c0e9
	  Boot ID:                    6696a814-64a8-47df-8a01-6884b1095a3e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-h742x                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 coredns-76f75df574-mfzpc                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 etcd-kubernetes-upgrade-171195                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         37s
	  kube-system                 kube-apiserver-kubernetes-upgrade-171195             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-171195    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-jqt67                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-kubernetes-upgrade-171195             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 50s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node kubernetes-upgrade-171195 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node kubernetes-upgrade-171195 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x7 over 50s)  kubelet          Node kubernetes-upgrade-171195 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  50s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           31s                node-controller  Node kubernetes-upgrade-171195 event: Registered Node kubernetes-upgrade-171195 in Controller
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.634152] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.069066] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075119] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.205330] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.144510] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.263039] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +5.469959] systemd-fstab-generator[731]: Ignoring "noauto" option for root device
	[  +0.081994] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.132650] systemd-fstab-generator[856]: Ignoring "noauto" option for root device
	[ +13.431853] systemd-fstab-generator[1240]: Ignoring "noauto" option for root device
	[  +0.087178] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.005542] kauditd_printk_skb: 21 callbacks suppressed
	[Mar11 21:19] systemd-fstab-generator[2032]: Ignoring "noauto" option for root device
	[  +0.103806] kauditd_printk_skb: 64 callbacks suppressed
	[  +0.080767] systemd-fstab-generator[2045]: Ignoring "noauto" option for root device
	[  +0.605978] systemd-fstab-generator[2220]: Ignoring "noauto" option for root device
	[  +0.451362] systemd-fstab-generator[2372]: Ignoring "noauto" option for root device
	[  +0.728054] systemd-fstab-generator[2557]: Ignoring "noauto" option for root device
	[  +1.358330] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[  +3.539592] systemd-fstab-generator[3449]: Ignoring "noauto" option for root device
	[  +0.113341] kauditd_printk_skb: 280 callbacks suppressed
	[  +5.645324] kauditd_printk_skb: 40 callbacks suppressed
	[  +1.826926] systemd-fstab-generator[3950]: Ignoring "noauto" option for root device
	
	
	==> etcd [5038326f1eb8b617cd53ad66d2a7732dae012180d8bf81f1cdc00cd2e8a292a9] <==
	{"level":"info","ts":"2024-03-11T21:19:10.860439Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"9.360679ms"}
	{"level":"info","ts":"2024-03-11T21:19:10.87444Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-03-11T21:19:10.883039Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"73137fd659599d","local-member-id":"84111105ea0e8722","commit-index":392}
	{"level":"info","ts":"2024-03-11T21:19:10.883187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 switched to configuration voters=()"}
	{"level":"info","ts":"2024-03-11T21:19:10.883207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 became follower at term 2"}
	{"level":"info","ts":"2024-03-11T21:19:10.88328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 84111105ea0e8722 [peers: [], term: 2, commit: 392, applied: 0, lastindex: 392, lastterm: 2]"}
	{"level":"warn","ts":"2024-03-11T21:19:10.895767Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-03-11T21:19:10.926306Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":382}
	{"level":"info","ts":"2024-03-11T21:19:10.930863Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-03-11T21:19:10.939127Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"84111105ea0e8722","timeout":"7s"}
	{"level":"info","ts":"2024-03-11T21:19:10.939363Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"84111105ea0e8722"}
	{"level":"info","ts":"2024-03-11T21:19:10.939433Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"84111105ea0e8722","local-server-version":"3.5.10","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-03-11T21:19:10.941744Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-11T21:19:10.941911Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"84111105ea0e8722","initial-advertise-peer-urls":["https://192.168.39.241:2380"],"listen-peer-urls":["https://192.168.39.241:2380"],"advertise-client-urls":["https://192.168.39.241:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.241:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T21:19:10.941966Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T21:19:10.942059Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-11T21:19:10.942193Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T21:19:10.942261Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T21:19:10.942275Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T21:19:10.942461Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.241:2380"}
	{"level":"info","ts":"2024-03-11T21:19:10.950191Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.241:2380"}
	{"level":"info","ts":"2024-03-11T21:19:10.950113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 switched to configuration voters=(9516406204709898018)"}
	{"level":"info","ts":"2024-03-11T21:19:10.950428Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"73137fd659599d","local-member-id":"84111105ea0e8722","added-peer-id":"84111105ea0e8722","added-peer-peer-urls":["https://192.168.39.241:2380"]}
	{"level":"info","ts":"2024-03-11T21:19:10.950638Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"73137fd659599d","local-member-id":"84111105ea0e8722","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:19:10.950667Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> etcd [cc161fe6949f9d7fb3cb2c86d8424643790be4b8c8837097a9fd9ba558ba15a0] <==
	{"level":"info","ts":"2024-03-11T21:19:17.2307Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"73137fd659599d","local-member-id":"84111105ea0e8722","added-peer-id":"84111105ea0e8722","added-peer-peer-urls":["https://192.168.39.241:2380"]}
	{"level":"info","ts":"2024-03-11T21:19:17.232768Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"73137fd659599d","local-member-id":"84111105ea0e8722","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:19:17.232873Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:19:17.23884Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T21:19:17.241568Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T21:19:17.241751Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T21:19:17.261559Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-11T21:19:17.262616Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.241:2380"}
	{"level":"info","ts":"2024-03-11T21:19:17.268798Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.241:2380"}
	{"level":"info","ts":"2024-03-11T21:19:17.264219Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"84111105ea0e8722","initial-advertise-peer-urls":["https://192.168.39.241:2380"],"listen-peer-urls":["https://192.168.39.241:2380"],"advertise-client-urls":["https://192.168.39.241:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.241:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T21:19:17.264325Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T21:19:18.812809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-11T21:19:18.812988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-11T21:19:18.813066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 received MsgPreVoteResp from 84111105ea0e8722 at term 2"}
	{"level":"info","ts":"2024-03-11T21:19:18.813131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 became candidate at term 3"}
	{"level":"info","ts":"2024-03-11T21:19:18.813208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 received MsgVoteResp from 84111105ea0e8722 at term 3"}
	{"level":"info","ts":"2024-03-11T21:19:18.813251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"84111105ea0e8722 became leader at term 3"}
	{"level":"info","ts":"2024-03-11T21:19:18.813293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 84111105ea0e8722 elected leader 84111105ea0e8722 at term 3"}
	{"level":"info","ts":"2024-03-11T21:19:18.819419Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"84111105ea0e8722","local-member-attributes":"{Name:kubernetes-upgrade-171195 ClientURLs:[https://192.168.39.241:2379]}","request-path":"/0/members/84111105ea0e8722/attributes","cluster-id":"73137fd659599d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T21:19:18.819636Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:19:18.820034Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T21:19:18.820091Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T21:19:18.820277Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:19:18.822984Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T21:19:18.824927Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.241:2379"}
	
	
	==> kernel <==
	 21:19:26 up 1 min,  0 users,  load average: 2.19, 0.62, 0.21
	Linux kubernetes-upgrade-171195 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6d6e9ff2a3f8b4b36b5dcebe15966a2180b64425f0ec4d626371d3f9e666b38c] <==
	
	
	==> kube-apiserver [e4dcffd4f05d2af2dbe377282439d14eca6968ff058e0b557584097c56d11c88] <==
	I0311 21:19:20.467703       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0311 21:19:20.467841       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0311 21:19:20.468732       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0311 21:19:20.468781       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0311 21:19:20.571795       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0311 21:19:20.598218       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0311 21:19:20.609863       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0311 21:19:20.610694       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0311 21:19:20.612235       1 aggregator.go:165] initial CRD sync complete...
	I0311 21:19:20.612289       1 autoregister_controller.go:141] Starting autoregister controller
	I0311 21:19:20.612300       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0311 21:19:20.612308       1 cache.go:39] Caches are synced for autoregister controller
	I0311 21:19:20.623960       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0311 21:19:20.624118       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0311 21:19:20.646238       1 shared_informer.go:318] Caches are synced for configmaps
	I0311 21:19:20.647850       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0311 21:19:20.654629       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0311 21:19:20.685045       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0311 21:19:21.458831       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0311 21:19:21.579852       1 controller.go:624] quota admission added evaluator for: endpoints
	I0311 21:19:22.762353       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0311 21:19:22.789917       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0311 21:19:22.846478       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0311 21:19:22.889476       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0311 21:19:22.905002       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [77fa53e4a50db7a01c96ea987de67a0ad0cecb76674fb4ecfb197b7c08c1260a] <==
	
	
	==> kube-controller-manager [ef384fd237e584f790e9b06cc45cb6c66600759956170aaa03ab61e5f3fd783a] <==
	I0311 21:19:22.931012       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0311 21:19:22.931022       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0311 21:19:22.981418       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0311 21:19:22.981600       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0311 21:19:23.031718       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0311 21:19:23.031840       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0311 21:19:23.031849       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0311 21:19:23.031855       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0311 21:19:23.081000       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0311 21:19:23.081139       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0311 21:19:23.081151       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0311 21:19:23.134237       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0311 21:19:23.134457       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0311 21:19:23.134326       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0311 21:19:23.134735       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0311 21:19:23.181876       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0311 21:19:23.182408       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0311 21:19:23.182469       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0311 21:19:23.241850       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0311 21:19:23.241925       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0311 21:19:23.241936       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0311 21:19:23.330375       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0311 21:19:23.330569       1 disruption.go:433] "Sending events to api server."
	I0311 21:19:23.330661       1 disruption.go:444] "Starting disruption controller"
	I0311 21:19:23.330672       1 shared_informer.go:311] Waiting for caches to sync for disruption
	
	
	==> kube-proxy [98e9a76cb379e8c8bdedbe4dd35fd1fc3ed9c983966b456f58a71268704cc242] <==
	I0311 21:19:21.614134       1 server_others.go:72] "Using iptables proxy"
	I0311 21:19:21.664680       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.241"]
	I0311 21:19:21.730422       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0311 21:19:21.730473       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 21:19:21.730578       1 server_others.go:168] "Using iptables Proxier"
	I0311 21:19:21.734909       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 21:19:21.735098       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0311 21:19:21.735108       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:19:21.736465       1 config.go:188] "Starting service config controller"
	I0311 21:19:21.736582       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 21:19:21.736602       1 config.go:97] "Starting endpoint slice config controller"
	I0311 21:19:21.736634       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 21:19:21.736998       1 config.go:315] "Starting node config controller"
	I0311 21:19:21.737040       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 21:19:21.837886       1 shared_informer.go:318] Caches are synced for node config
	I0311 21:19:21.837951       1 shared_informer.go:318] Caches are synced for service config
	I0311 21:19:21.837997       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d182a0315037aa7626d69a740d9576011f0f36ae9d2c42ea12e1617f05c20396] <==
	
	
	==> kube-scheduler [27ba2e79a7302a27c6ce4efe8d60cd8659eebf08c886ff0a860598e7a61e585a] <==
	
	
	==> kube-scheduler [98f88548d817de01f0c08dc28f3d5360e0bdb307d461722d137797666242a8ed] <==
	I0311 21:19:17.950811       1 serving.go:380] Generated self-signed cert in-memory
	W0311 21:19:20.530301       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0311 21:19:20.530359       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 21:19:20.530376       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0311 21:19:20.530385       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0311 21:19:20.632799       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0311 21:19:20.632850       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:19:20.639310       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0311 21:19:20.639441       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0311 21:19:20.639456       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 21:19:20.639474       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0311 21:19:20.741445       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 21:19:16 kubernetes-upgrade-171195 kubelet[3456]: E0311 21:19:16.458145    3456 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.241:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-171195.17bbd285d352758e  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-171195,UID:kubernetes-upgrade-171195,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-171195,},FirstTimestamp:2024-03-11 21:19:15.73511515 +0000 UTC m=+0.124707775,LastTimestamp:2024-03-11 21:19:15.73511515 +0000 UTC m=+0.124707775,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-171195,}"
	Mar 11 21:19:16 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:16.489919    3456 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-171195"
	Mar 11 21:19:16 kubernetes-upgrade-171195 kubelet[3456]: E0311 21:19:16.490952    3456 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.241:8443: connect: connection refused" node="kubernetes-upgrade-171195"
	Mar 11 21:19:16 kubernetes-upgrade-171195 kubelet[3456]: W0311 21:19:16.593363    3456 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-171195&limit=500&resourceVersion=0": dial tcp 192.168.39.241:8443: connect: connection refused
	Mar 11 21:19:16 kubernetes-upgrade-171195 kubelet[3456]: E0311 21:19:16.593468    3456 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-171195&limit=500&resourceVersion=0": dial tcp 192.168.39.241:8443: connect: connection refused
	Mar 11 21:19:16 kubernetes-upgrade-171195 kubelet[3456]: W0311 21:19:16.802288    3456 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.241:8443: connect: connection refused
	Mar 11 21:19:16 kubernetes-upgrade-171195 kubelet[3456]: E0311 21:19:16.802395    3456 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.241:8443: connect: connection refused
	Mar 11 21:19:17 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:17.293034    3456 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-171195"
	Mar 11 21:19:20 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:20.685988    3456 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-171195"
	Mar 11 21:19:20 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:20.686589    3456 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-171195"
	Mar 11 21:19:20 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:20.689201    3456 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 11 21:19:20 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:20.690660    3456 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 11 21:19:20 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:20.736800    3456 apiserver.go:52] "Watching apiserver"
	Mar 11 21:19:20 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:20.742821    3456 topology_manager.go:215] "Topology Admit Handler" podUID="a8e32a95-4ea1-439a-9f19-31f905d595f3" podNamespace="kube-system" podName="storage-provisioner"
	Mar 11 21:19:20 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:20.743895    3456 topology_manager.go:215] "Topology Admit Handler" podUID="c393371c-7907-4438-9010-f0c291348e0a" podNamespace="kube-system" podName="coredns-76f75df574-mfzpc"
	Mar 11 21:19:20 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:20.744199    3456 topology_manager.go:215] "Topology Admit Handler" podUID="0559e2d7-cd0d-4342-8fd8-9957a2b64a83" podNamespace="kube-system" podName="kube-proxy-jqt67"
	Mar 11 21:19:20 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:20.744386    3456 topology_manager.go:215] "Topology Admit Handler" podUID="4fc64d72-cefb-455d-8082-36b07f597d3c" podNamespace="kube-system" podName="coredns-76f75df574-h742x"
	Mar 11 21:19:20 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:20.774238    3456 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 11 21:19:20 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:20.831372    3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0559e2d7-cd0d-4342-8fd8-9957a2b64a83-xtables-lock\") pod \"kube-proxy-jqt67\" (UID: \"0559e2d7-cd0d-4342-8fd8-9957a2b64a83\") " pod="kube-system/kube-proxy-jqt67"
	Mar 11 21:19:20 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:20.831436    3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0559e2d7-cd0d-4342-8fd8-9957a2b64a83-lib-modules\") pod \"kube-proxy-jqt67\" (UID: \"0559e2d7-cd0d-4342-8fd8-9957a2b64a83\") " pod="kube-system/kube-proxy-jqt67"
	Mar 11 21:19:20 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:20.831567    3456 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a8e32a95-4ea1-439a-9f19-31f905d595f3-tmp\") pod \"storage-provisioner\" (UID: \"a8e32a95-4ea1-439a-9f19-31f905d595f3\") " pod="kube-system/storage-provisioner"
	Mar 11 21:19:21 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:21.043818    3456 scope.go:117] "RemoveContainer" containerID="454c026959fea19f32b810d913682433120e2dfd7b19bff1988a7eacb04a09c7"
	Mar 11 21:19:21 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:21.045012    3456 scope.go:117] "RemoveContainer" containerID="278ddc87f9a076fbb1679cdc3337faa9853955ccc6836e66b065c6bb1365dc82"
	Mar 11 21:19:21 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:21.045557    3456 scope.go:117] "RemoveContainer" containerID="d182a0315037aa7626d69a740d9576011f0f36ae9d2c42ea12e1617f05c20396"
	Mar 11 21:19:21 kubernetes-upgrade-171195 kubelet[3456]: I0311 21:19:21.048027    3456 scope.go:117] "RemoveContainer" containerID="175d4073dbd7b5d37bae2d67ee7f4b633c09d87b337a8fcdc9acdf7c4686b9d0"
	
	
	==> storage-provisioner [454c026959fea19f32b810d913682433120e2dfd7b19bff1988a7eacb04a09c7] <==
	
	
	==> storage-provisioner [8267a95756428f55a07338891777778c90f82fbbaf7b0ef068bfd43aa40a14df] <==
	I0311 21:19:21.507783       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 21:19:21.553154       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 21:19:21.553336       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 21:19:21.597360       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 21:19:21.601570       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-171195_42340d91-c1b7-4718-a124-37b4c12067d2!
	I0311 21:19:21.607916       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0c729bef-d3eb-4f9c-84af-263a511075ca", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-171195_42340d91-c1b7-4718-a124-37b4c12067d2 became leader
	I0311 21:19:21.706667       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-171195_42340d91-c1b7-4718-a124-37b4c12067d2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-171195 -n kubernetes-upgrade-171195
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-171195 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-171195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-171195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-171195: (1.146777809s)
--- FAIL: TestKubernetesUpgrade (384.90s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (61.81s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-717098 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-717098 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.76076994s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-717098] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-717098" primary control-plane node in "pause-717098" cluster
	* Updating the running kvm2 "pause-717098" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-717098" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 21:18:42.761567   54538 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:18:42.761662   54538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:18:42.761675   54538 out.go:304] Setting ErrFile to fd 2...
	I0311 21:18:42.761682   54538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:18:42.761870   54538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:18:42.762386   54538 out.go:298] Setting JSON to false
	I0311 21:18:42.763491   54538 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7272,"bootTime":1710184651,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:18:42.763552   54538 start.go:139] virtualization: kvm guest
	I0311 21:18:42.766125   54538 out.go:177] * [pause-717098] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:18:42.768025   54538 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:18:42.767909   54538 notify.go:220] Checking for updates...
	I0311 21:18:42.769524   54538 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:18:42.770875   54538 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:18:42.772184   54538 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:18:42.773575   54538 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:18:42.774985   54538 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:18:42.776764   54538 config.go:182] Loaded profile config "pause-717098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:18:42.777171   54538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:18:42.777210   54538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:18:42.792296   54538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42267
	I0311 21:18:42.792687   54538 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:18:42.793223   54538 main.go:141] libmachine: Using API Version  1
	I0311 21:18:42.793245   54538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:18:42.793675   54538 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:18:42.793914   54538 main.go:141] libmachine: (pause-717098) Calling .DriverName
	I0311 21:18:42.794229   54538 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:18:42.794578   54538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:18:42.794618   54538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:18:42.810119   54538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I0311 21:18:42.810536   54538 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:18:42.810996   54538 main.go:141] libmachine: Using API Version  1
	I0311 21:18:42.811020   54538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:18:42.811375   54538 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:18:42.811560   54538 main.go:141] libmachine: (pause-717098) Calling .DriverName
	I0311 21:18:42.848829   54538 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 21:18:42.850066   54538 start.go:297] selected driver: kvm2
	I0311 21:18:42.850088   54538 start.go:901] validating driver "kvm2" against &{Name:pause-717098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-717098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:18:42.850270   54538 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:18:42.850698   54538 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:18:42.850771   54538 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:18:42.865145   54538 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:18:42.866088   54538 cni.go:84] Creating CNI manager for ""
	I0311 21:18:42.866104   54538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:18:42.866161   54538 start.go:340] cluster config:
	{Name:pause-717098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-717098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:18:42.866279   54538 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:18:42.868020   54538 out.go:177] * Starting "pause-717098" primary control-plane node in "pause-717098" cluster
	I0311 21:18:42.869311   54538 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:18:42.869356   54538 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0311 21:18:42.869365   54538 cache.go:56] Caching tarball of preloaded images
	I0311 21:18:42.869442   54538 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:18:42.869451   54538 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 21:18:42.869566   54538 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/pause-717098/config.json ...
	I0311 21:18:42.869745   54538 start.go:360] acquireMachinesLock for pause-717098: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:18:52.697973   54538 start.go:364] duration metric: took 9.828191226s to acquireMachinesLock for "pause-717098"
	I0311 21:18:52.698018   54538 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:18:52.698026   54538 fix.go:54] fixHost starting: 
	I0311 21:18:52.698405   54538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:18:52.698445   54538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:18:52.717499   54538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38519
	I0311 21:18:52.717995   54538 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:18:52.718495   54538 main.go:141] libmachine: Using API Version  1
	I0311 21:18:52.718517   54538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:18:52.718884   54538 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:18:52.719074   54538 main.go:141] libmachine: (pause-717098) Calling .DriverName
	I0311 21:18:52.719240   54538 main.go:141] libmachine: (pause-717098) Calling .GetState
	I0311 21:18:52.720772   54538 fix.go:112] recreateIfNeeded on pause-717098: state=Running err=<nil>
	W0311 21:18:52.720793   54538 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:18:52.723157   54538 out.go:177] * Updating the running kvm2 "pause-717098" VM ...
	I0311 21:18:52.724489   54538 machine.go:94] provisionDockerMachine start ...
	I0311 21:18:52.724515   54538 main.go:141] libmachine: (pause-717098) Calling .DriverName
	I0311 21:18:52.724813   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHHostname
	I0311 21:18:52.727604   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:52.728094   54538 main.go:141] libmachine: (pause-717098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2a:ed", ip: ""} in network mk-pause-717098: {Iface:virbr4 ExpiryTime:2024-03-11 22:17:54 +0000 UTC Type:0 Mac:52:54:00:80:2a:ed Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-717098 Clientid:01:52:54:00:80:2a:ed}
	I0311 21:18:52.728125   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined IP address 192.168.50.163 and MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:52.728241   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHPort
	I0311 21:18:52.728405   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:18:52.728569   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:18:52.728701   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHUsername
	I0311 21:18:52.728886   54538 main.go:141] libmachine: Using SSH client type: native
	I0311 21:18:52.729077   54538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I0311 21:18:52.729092   54538 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:18:52.841852   54538 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-717098
	
	I0311 21:18:52.841886   54538 main.go:141] libmachine: (pause-717098) Calling .GetMachineName
	I0311 21:18:52.842258   54538 buildroot.go:166] provisioning hostname "pause-717098"
	I0311 21:18:52.842285   54538 main.go:141] libmachine: (pause-717098) Calling .GetMachineName
	I0311 21:18:52.842481   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHHostname
	I0311 21:18:52.845264   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:52.845673   54538 main.go:141] libmachine: (pause-717098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2a:ed", ip: ""} in network mk-pause-717098: {Iface:virbr4 ExpiryTime:2024-03-11 22:17:54 +0000 UTC Type:0 Mac:52:54:00:80:2a:ed Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-717098 Clientid:01:52:54:00:80:2a:ed}
	I0311 21:18:52.845702   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined IP address 192.168.50.163 and MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:52.845837   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHPort
	I0311 21:18:52.846048   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:18:52.846216   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:18:52.846368   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHUsername
	I0311 21:18:52.846534   54538 main.go:141] libmachine: Using SSH client type: native
	I0311 21:18:52.846709   54538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I0311 21:18:52.846724   54538 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-717098 && echo "pause-717098" | sudo tee /etc/hostname
	I0311 21:18:52.973230   54538 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-717098
	
	I0311 21:18:52.973258   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHHostname
	I0311 21:18:52.975869   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:52.976214   54538 main.go:141] libmachine: (pause-717098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2a:ed", ip: ""} in network mk-pause-717098: {Iface:virbr4 ExpiryTime:2024-03-11 22:17:54 +0000 UTC Type:0 Mac:52:54:00:80:2a:ed Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-717098 Clientid:01:52:54:00:80:2a:ed}
	I0311 21:18:52.976241   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined IP address 192.168.50.163 and MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:52.976438   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHPort
	I0311 21:18:52.976637   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:18:52.976790   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:18:52.976911   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHUsername
	I0311 21:18:52.977069   54538 main.go:141] libmachine: Using SSH client type: native
	I0311 21:18:52.977287   54538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I0311 21:18:52.977312   54538 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-717098' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-717098/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-717098' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:18:53.088821   54538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:18:53.088852   54538 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:18:53.088888   54538 buildroot.go:174] setting up certificates
	I0311 21:18:53.088897   54538 provision.go:84] configureAuth start
	I0311 21:18:53.088913   54538 main.go:141] libmachine: (pause-717098) Calling .GetMachineName
	I0311 21:18:53.089159   54538 main.go:141] libmachine: (pause-717098) Calling .GetIP
	I0311 21:18:53.091978   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:53.092323   54538 main.go:141] libmachine: (pause-717098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2a:ed", ip: ""} in network mk-pause-717098: {Iface:virbr4 ExpiryTime:2024-03-11 22:17:54 +0000 UTC Type:0 Mac:52:54:00:80:2a:ed Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-717098 Clientid:01:52:54:00:80:2a:ed}
	I0311 21:18:53.092352   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined IP address 192.168.50.163 and MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:53.092557   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHHostname
	I0311 21:18:53.095014   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:53.095447   54538 main.go:141] libmachine: (pause-717098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2a:ed", ip: ""} in network mk-pause-717098: {Iface:virbr4 ExpiryTime:2024-03-11 22:17:54 +0000 UTC Type:0 Mac:52:54:00:80:2a:ed Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-717098 Clientid:01:52:54:00:80:2a:ed}
	I0311 21:18:53.095478   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined IP address 192.168.50.163 and MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:53.095628   54538 provision.go:143] copyHostCerts
	I0311 21:18:53.095684   54538 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:18:53.095694   54538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:18:53.095744   54538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:18:53.095823   54538 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:18:53.095831   54538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:18:53.095848   54538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:18:53.095899   54538 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:18:53.095906   54538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:18:53.095933   54538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:18:53.095991   54538 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.pause-717098 san=[127.0.0.1 192.168.50.163 localhost minikube pause-717098]
	I0311 21:18:53.251029   54538 provision.go:177] copyRemoteCerts
	I0311 21:18:53.251107   54538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:18:53.251134   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHHostname
	I0311 21:18:53.254577   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:53.255098   54538 main.go:141] libmachine: (pause-717098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2a:ed", ip: ""} in network mk-pause-717098: {Iface:virbr4 ExpiryTime:2024-03-11 22:17:54 +0000 UTC Type:0 Mac:52:54:00:80:2a:ed Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-717098 Clientid:01:52:54:00:80:2a:ed}
	I0311 21:18:53.255143   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined IP address 192.168.50.163 and MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:53.255272   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHPort
	I0311 21:18:53.255454   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:18:53.255630   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHUsername
	I0311 21:18:53.255784   54538 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/pause-717098/id_rsa Username:docker}
	I0311 21:18:53.354042   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:18:53.391831   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:18:53.429047   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0311 21:18:53.466481   54538 provision.go:87] duration metric: took 377.568568ms to configureAuth
	I0311 21:18:53.466504   54538 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:18:53.466744   54538 config.go:182] Loaded profile config "pause-717098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:18:53.466831   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHHostname
	I0311 21:18:53.470040   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:53.470591   54538 main.go:141] libmachine: (pause-717098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2a:ed", ip: ""} in network mk-pause-717098: {Iface:virbr4 ExpiryTime:2024-03-11 22:17:54 +0000 UTC Type:0 Mac:52:54:00:80:2a:ed Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-717098 Clientid:01:52:54:00:80:2a:ed}
	I0311 21:18:53.470627   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined IP address 192.168.50.163 and MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:53.470858   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHPort
	I0311 21:18:53.471051   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:18:53.471206   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:18:53.471350   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHUsername
	I0311 21:18:53.471520   54538 main.go:141] libmachine: Using SSH client type: native
	I0311 21:18:53.471713   54538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I0311 21:18:53.471731   54538 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:18:59.882741   54538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:18:59.882766   54538 machine.go:97] duration metric: took 7.158260153s to provisionDockerMachine
	I0311 21:18:59.882780   54538 start.go:293] postStartSetup for "pause-717098" (driver="kvm2")
	I0311 21:18:59.882792   54538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:18:59.882811   54538 main.go:141] libmachine: (pause-717098) Calling .DriverName
	I0311 21:18:59.883166   54538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:18:59.883202   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHHostname
	I0311 21:18:59.886044   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:59.886414   54538 main.go:141] libmachine: (pause-717098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2a:ed", ip: ""} in network mk-pause-717098: {Iface:virbr4 ExpiryTime:2024-03-11 22:17:54 +0000 UTC Type:0 Mac:52:54:00:80:2a:ed Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-717098 Clientid:01:52:54:00:80:2a:ed}
	I0311 21:18:59.886438   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined IP address 192.168.50.163 and MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:18:59.886583   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHPort
	I0311 21:18:59.886749   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:18:59.886901   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHUsername
	I0311 21:18:59.887052   54538 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/pause-717098/id_rsa Username:docker}
	I0311 21:18:59.970693   54538 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:18:59.976870   54538 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:18:59.976896   54538 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:18:59.976955   54538 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:18:59.977041   54538 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:18:59.977149   54538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:18:59.989352   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:19:00.019736   54538 start.go:296] duration metric: took 136.941669ms for postStartSetup
	I0311 21:19:00.019777   54538 fix.go:56] duration metric: took 7.321750503s for fixHost
	I0311 21:19:00.019795   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHHostname
	I0311 21:19:00.022544   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:19:00.022991   54538 main.go:141] libmachine: (pause-717098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2a:ed", ip: ""} in network mk-pause-717098: {Iface:virbr4 ExpiryTime:2024-03-11 22:17:54 +0000 UTC Type:0 Mac:52:54:00:80:2a:ed Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-717098 Clientid:01:52:54:00:80:2a:ed}
	I0311 21:19:00.023018   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined IP address 192.168.50.163 and MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:19:00.023241   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHPort
	I0311 21:19:00.023460   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:19:00.023654   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:19:00.023789   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHUsername
	I0311 21:19:00.024021   54538 main.go:141] libmachine: Using SSH client type: native
	I0311 21:19:00.024187   54538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.163 22 <nil> <nil>}
	I0311 21:19:00.024199   54538 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 21:19:00.134517   54538 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710191940.131115036
	
	I0311 21:19:00.134552   54538 fix.go:216] guest clock: 1710191940.131115036
	I0311 21:19:00.134564   54538 fix.go:229] Guest: 2024-03-11 21:19:00.131115036 +0000 UTC Remote: 2024-03-11 21:19:00.019781369 +0000 UTC m=+17.306891895 (delta=111.333667ms)
	I0311 21:19:00.134612   54538 fix.go:200] guest clock delta is within tolerance: 111.333667ms
	I0311 21:19:00.134618   54538 start.go:83] releasing machines lock for "pause-717098", held for 7.436620045s
	I0311 21:19:00.134649   54538 main.go:141] libmachine: (pause-717098) Calling .DriverName
	I0311 21:19:00.134958   54538 main.go:141] libmachine: (pause-717098) Calling .GetIP
	I0311 21:19:00.138137   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:19:00.138549   54538 main.go:141] libmachine: (pause-717098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2a:ed", ip: ""} in network mk-pause-717098: {Iface:virbr4 ExpiryTime:2024-03-11 22:17:54 +0000 UTC Type:0 Mac:52:54:00:80:2a:ed Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-717098 Clientid:01:52:54:00:80:2a:ed}
	I0311 21:19:00.138574   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined IP address 192.168.50.163 and MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:19:00.138830   54538 main.go:141] libmachine: (pause-717098) Calling .DriverName
	I0311 21:19:00.139552   54538 main.go:141] libmachine: (pause-717098) Calling .DriverName
	I0311 21:19:00.139756   54538 main.go:141] libmachine: (pause-717098) Calling .DriverName
	I0311 21:19:00.139842   54538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:19:00.139884   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHHostname
	I0311 21:19:00.139939   54538 ssh_runner.go:195] Run: cat /version.json
	I0311 21:19:00.139964   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHHostname
	I0311 21:19:00.142795   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:19:00.143329   54538 main.go:141] libmachine: (pause-717098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2a:ed", ip: ""} in network mk-pause-717098: {Iface:virbr4 ExpiryTime:2024-03-11 22:17:54 +0000 UTC Type:0 Mac:52:54:00:80:2a:ed Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-717098 Clientid:01:52:54:00:80:2a:ed}
	I0311 21:19:00.143374   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined IP address 192.168.50.163 and MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:19:00.143896   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:19:00.143996   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHPort
	I0311 21:19:00.143995   54538 main.go:141] libmachine: (pause-717098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2a:ed", ip: ""} in network mk-pause-717098: {Iface:virbr4 ExpiryTime:2024-03-11 22:17:54 +0000 UTC Type:0 Mac:52:54:00:80:2a:ed Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-717098 Clientid:01:52:54:00:80:2a:ed}
	I0311 21:19:00.144154   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined IP address 192.168.50.163 and MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:19:00.144333   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:19:00.144393   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHPort
	I0311 21:19:00.144551   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHKeyPath
	I0311 21:19:00.144630   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHUsername
	I0311 21:19:00.144686   54538 main.go:141] libmachine: (pause-717098) Calling .GetSSHUsername
	I0311 21:19:00.144917   54538 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/pause-717098/id_rsa Username:docker}
	I0311 21:19:00.145223   54538 sshutil.go:53] new ssh client: &{IP:192.168.50.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/pause-717098/id_rsa Username:docker}
	I0311 21:19:00.245671   54538 ssh_runner.go:195] Run: systemctl --version
	I0311 21:19:00.252839   54538 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:19:00.426495   54538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:19:00.435581   54538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:19:00.435648   54538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:19:00.450919   54538 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0311 21:19:00.450942   54538 start.go:494] detecting cgroup driver to use...
	I0311 21:19:00.451008   54538 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:19:00.477548   54538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:19:00.501218   54538 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:19:00.501285   54538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:19:00.523058   54538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:19:00.541505   54538 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:19:00.766675   54538 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:19:01.043884   54538 docker.go:233] disabling docker service ...
	I0311 21:19:01.043960   54538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:19:01.118545   54538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:19:01.157350   54538 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:19:01.468802   54538 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:19:01.772442   54538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:19:01.800340   54538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:19:01.825092   54538 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:19:01.825146   54538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:19:01.844278   54538 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:19:01.844328   54538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:19:01.957585   54538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:19:02.076752   54538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:19:02.131373   54538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:19:02.146059   54538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:19:02.161871   54538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:19:02.190855   54538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:19:02.459831   54538 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:19:13.056685   54538 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.596814006s)
	I0311 21:19:13.056720   54538 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:19:13.056783   54538 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:19:13.064248   54538 start.go:562] Will wait 60s for crictl version
	I0311 21:19:13.064309   54538 ssh_runner.go:195] Run: which crictl
	I0311 21:19:13.070400   54538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:19:13.145902   54538 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:19:13.146007   54538 ssh_runner.go:195] Run: crio --version
	I0311 21:19:13.196500   54538 ssh_runner.go:195] Run: crio --version
	I0311 21:19:13.244754   54538 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 21:19:13.246179   54538 main.go:141] libmachine: (pause-717098) Calling .GetIP
	I0311 21:19:13.249527   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:19:13.249941   54538 main.go:141] libmachine: (pause-717098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:2a:ed", ip: ""} in network mk-pause-717098: {Iface:virbr4 ExpiryTime:2024-03-11 22:17:54 +0000 UTC Type:0 Mac:52:54:00:80:2a:ed Iaid: IPaddr:192.168.50.163 Prefix:24 Hostname:pause-717098 Clientid:01:52:54:00:80:2a:ed}
	I0311 21:19:13.249969   54538 main.go:141] libmachine: (pause-717098) DBG | domain pause-717098 has defined IP address 192.168.50.163 and MAC address 52:54:00:80:2a:ed in network mk-pause-717098
	I0311 21:19:13.250169   54538 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0311 21:19:13.256821   54538 kubeadm.go:877] updating cluster {Name:pause-717098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-717098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:19:13.257000   54538 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:19:13.257054   54538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:19:13.315989   54538 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:19:13.316017   54538 crio.go:415] Images already preloaded, skipping extraction
	I0311 21:19:13.316077   54538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:19:13.370473   54538 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:19:13.370498   54538 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:19:13.370508   54538 kubeadm.go:928] updating node { 192.168.50.163 8443 v1.28.4 crio true true} ...
	I0311 21:19:13.370632   54538 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-717098 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-717098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:19:13.370706   54538 ssh_runner.go:195] Run: crio config
	I0311 21:19:13.456537   54538 cni.go:84] Creating CNI manager for ""
	I0311 21:19:13.456566   54538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:19:13.456582   54538 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:19:13.456610   54538 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.163 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-717098 NodeName:pause-717098 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:19:13.456830   54538 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-717098"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:19:13.456902   54538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 21:19:13.472662   54538 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:19:13.472723   54538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:19:13.487983   54538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0311 21:19:13.509850   54538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:19:13.534178   54538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0311 21:19:13.557750   54538 ssh_runner.go:195] Run: grep 192.168.50.163	control-plane.minikube.internal$ /etc/hosts
	I0311 21:19:13.562623   54538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:19:13.730252   54538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:19:13.769560   54538 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/pause-717098 for IP: 192.168.50.163
	I0311 21:19:13.769591   54538 certs.go:194] generating shared ca certs ...
	I0311 21:19:13.769635   54538 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:19:13.769875   54538 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:19:13.769985   54538 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:19:13.770025   54538 certs.go:256] generating profile certs ...
	I0311 21:19:13.770165   54538 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/pause-717098/client.key
	I0311 21:19:13.770349   54538 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/pause-717098/apiserver.key.bb7569ee
	I0311 21:19:13.770407   54538 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/pause-717098/proxy-client.key
	I0311 21:19:13.770548   54538 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:19:13.770585   54538 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:19:13.770598   54538 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:19:13.770658   54538 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:19:13.770695   54538 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:19:13.770741   54538 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:19:13.770810   54538 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:19:13.771666   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:19:13.925820   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:19:14.114498   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:19:14.410575   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:19:14.516498   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/pause-717098/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0311 21:19:14.562229   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/pause-717098/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 21:19:14.605972   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/pause-717098/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:19:14.644467   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/pause-717098/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 21:19:14.678259   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:19:14.710825   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:19:14.744696   54538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:19:14.774084   54538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:19:14.846423   54538 ssh_runner.go:195] Run: openssl version
	I0311 21:19:14.854621   54538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:19:14.869323   54538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:19:14.876367   54538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:19:14.876417   54538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:19:14.883164   54538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:19:14.895684   54538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:19:14.962185   54538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:19:14.969537   54538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:19:14.969600   54538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:19:14.976121   54538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:19:14.993332   54538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:19:15.011353   54538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:19:15.019956   54538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:19:15.020002   54538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:19:15.027868   54538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:19:15.041322   54538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:19:15.046627   54538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:19:15.052863   54538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:19:15.058968   54538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:19:15.065337   54538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:19:15.071342   54538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:19:15.077677   54538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 21:19:15.085614   54538 kubeadm.go:391] StartCluster: {Name:pause-717098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-717098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:19:15.085722   54538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:19:15.085775   54538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:19:15.139587   54538 cri.go:89] found id: "a1922e224e43ada1274d2dab49d83f21a12deb63f58141bf9ab304755f4793e3"
	I0311 21:19:15.139608   54538 cri.go:89] found id: "57b3e179246e85911e4f5e610e037532173fe6ed3223da3c346ff1978e371195"
	I0311 21:19:15.139614   54538 cri.go:89] found id: "173082cedec1acbe4feda4779bb6df9def3edc0c7d34b0e451a1cd7d86c4ce16"
	I0311 21:19:15.139620   54538 cri.go:89] found id: "bd0976acdedcf95dd211d1f36fe72e3c5c5e504fd572d291cda16ba266bb1c48"
	I0311 21:19:15.139624   54538 cri.go:89] found id: "36afd2df4351738709d5d1eb16a39204f4129473af32e31066b607ce341e1a80"
	I0311 21:19:15.139648   54538 cri.go:89] found id: "03578cb6bf10e3bea8c86c3ced9926dc4ea7dc66963ee5eaafd0e6c8016eff83"
	I0311 21:19:15.139656   54538 cri.go:89] found id: ""
	I0311 21:19:15.139701   54538 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-717098 -n pause-717098
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-717098 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-717098 logs -n 25: (1.404582085s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | stopped-upgrade-890519 stop           | minikube                  | jenkins | v1.26.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
	| start   | -p stopped-upgrade-890519             | stopped-upgrade-890519    | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-169709             | running-upgrade-169709    | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:17 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-890519             | stopped-upgrade-890519    | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
	| start   | -p cert-options-406431                | cert-options-406431       | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-169709             | running-upgrade-169709    | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	| start   | -p force-systemd-env-922319           | force-systemd-env-922319  | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:18 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-406431 ssh               | cert-options-406431       | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-406431 -- sudo        | cert-options-406431       | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-406431                | cert-options-406431       | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	| start   | -p pause-717098 --memory=2048         | pause-717098              | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:18 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	| start   | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:18 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-922319           | force-systemd-env-922319  | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC | 11 Mar 24 21:18 UTC |
	| start   | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC | 11 Mar 24 21:19 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-717098                       | pause-717098              | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC | 11 Mar 24 21:19 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC | 11 Mar 24 21:19 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-228186             | cert-expiration-228186    | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC | 11 Mar 24 21:19 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC | 11 Mar 24 21:19 UTC |
	| start   | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC |                     |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC | 11 Mar 24 21:19 UTC |
	| start   | -p auto-427678 --memory=3072          | auto-427678               | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 21:19:28
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 21:19:28.591127   55482 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:19:28.591355   55482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:19:28.591363   55482 out.go:304] Setting ErrFile to fd 2...
	I0311 21:19:28.591367   55482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:19:28.591522   55482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:19:28.592066   55482 out.go:298] Setting JSON to false
	I0311 21:19:28.593142   55482 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7318,"bootTime":1710184651,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:19:28.593211   55482 start.go:139] virtualization: kvm guest
	I0311 21:19:28.595603   55482 out.go:177] * [auto-427678] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:19:28.597330   55482 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:19:28.598556   55482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:19:28.597348   55482 notify.go:220] Checking for updates...
	I0311 21:19:28.601052   55482 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:19:28.602383   55482 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:19:28.603672   55482 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:19:28.604997   55482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:19:28.606671   55482 config.go:182] Loaded profile config "NoKubernetes-364658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0311 21:19:28.606783   55482 config.go:182] Loaded profile config "cert-expiration-228186": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:19:28.606963   55482 config.go:182] Loaded profile config "pause-717098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:19:28.607068   55482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:19:28.641565   55482 out.go:177] * Using the kvm2 driver based on user configuration
	I0311 21:19:28.642909   55482 start.go:297] selected driver: kvm2
	I0311 21:19:28.642923   55482 start.go:901] validating driver "kvm2" against <nil>
	I0311 21:19:28.642934   55482 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:19:28.643802   55482 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:19:28.643901   55482 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:19:28.659164   55482 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:19:28.659208   55482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 21:19:28.659420   55482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:19:28.659447   55482 cni.go:84] Creating CNI manager for ""
	I0311 21:19:28.659454   55482 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:19:28.659467   55482 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 21:19:28.659522   55482 start.go:340] cluster config:
	{Name:auto-427678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-427678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:19:28.659619   55482 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:19:28.661388   55482 out.go:177] * Starting "auto-427678" primary control-plane node in "auto-427678" cluster
	I0311 21:19:27.491307   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:27.491821   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:27.491836   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:27.491787   55166 retry.go:31] will retry after 1.100375764s: waiting for machine to come up
	I0311 21:19:28.593916   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:28.594524   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:28.594546   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:28.594493   55166 retry.go:31] will retry after 1.297605075s: waiting for machine to come up
	I0311 21:19:29.893899   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:29.894418   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:29.894438   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:29.894365   55166 retry.go:31] will retry after 1.207673054s: waiting for machine to come up
	I0311 21:19:31.104140   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:31.104648   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:31.104658   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:31.104601   55166 retry.go:31] will retry after 1.459882908s: waiting for machine to come up
	I0311 21:19:28.205255   54538 pod_ready.go:102] pod "coredns-5dd5756b68-qrgqd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:19:30.194899   54538 pod_ready.go:92] pod "coredns-5dd5756b68-qrgqd" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:30.194926   54538 pod_ready.go:81] duration metric: took 6.008696213s for pod "coredns-5dd5756b68-qrgqd" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:30.194937   54538 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:30.199950   54538 pod_ready.go:92] pod "etcd-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:30.199974   54538 pod_ready.go:81] duration metric: took 5.028018ms for pod "etcd-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:30.199986   54538 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:32.208199   54538 pod_ready.go:102] pod "kube-apiserver-pause-717098" in "kube-system" namespace has status "Ready":"False"
	I0311 21:19:28.662747   55482 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:19:28.662806   55482 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0311 21:19:28.662817   55482 cache.go:56] Caching tarball of preloaded images
	I0311 21:19:28.662892   55482 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:19:28.662906   55482 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 21:19:28.663018   55482 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/config.json ...
	I0311 21:19:28.663040   55482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/config.json: {Name:mk2b4142a1d074325aa3354d6f08868465d3665a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:19:28.663191   55482 start.go:360] acquireMachinesLock for auto-427678: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:19:32.566066   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:32.566692   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:32.566713   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:32.566619   55166 retry.go:31] will retry after 2.087814321s: waiting for machine to come up
	I0311 21:19:34.656261   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:34.656684   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:34.656706   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:34.656613   55166 retry.go:31] will retry after 3.295172264s: waiting for machine to come up
	I0311 21:19:34.209886   54538 pod_ready.go:102] pod "kube-apiserver-pause-717098" in "kube-system" namespace has status "Ready":"False"
	I0311 21:19:36.707286   54538 pod_ready.go:102] pod "kube-apiserver-pause-717098" in "kube-system" namespace has status "Ready":"False"
	I0311 21:19:37.206600   54538 pod_ready.go:92] pod "kube-apiserver-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:37.206621   54538 pod_ready.go:81] duration metric: took 7.006627752s for pod "kube-apiserver-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.206629   54538 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.211903   54538 pod_ready.go:92] pod "kube-controller-manager-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:37.211923   54538 pod_ready.go:81] duration metric: took 5.286679ms for pod "kube-controller-manager-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.211933   54538 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4xhj5" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.216522   54538 pod_ready.go:92] pod "kube-proxy-4xhj5" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:37.216539   54538 pod_ready.go:81] duration metric: took 4.600082ms for pod "kube-proxy-4xhj5" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.216546   54538 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.221411   54538 pod_ready.go:92] pod "kube-scheduler-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:37.221429   54538 pod_ready.go:81] duration metric: took 4.87784ms for pod "kube-scheduler-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.221436   54538 pod_ready.go:38] duration metric: took 13.042973031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:19:37.221450   54538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:19:37.235546   54538 ops.go:34] apiserver oom_adj: -16
	I0311 21:19:37.235561   54538 kubeadm.go:591] duration metric: took 22.020075759s to restartPrimaryControlPlane
	I0311 21:19:37.235567   54538 kubeadm.go:393] duration metric: took 22.149959022s to StartCluster
	I0311 21:19:37.235578   54538 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:19:37.235629   54538 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:19:37.236841   54538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:19:37.237118   54538 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:19:37.238844   54538 out.go:177] * Verifying Kubernetes components...
	I0311 21:19:37.237205   54538 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:19:37.237334   54538 config.go:182] Loaded profile config "pause-717098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:19:37.240326   54538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:19:37.242807   54538 out.go:177] * Enabled addons: 
	I0311 21:19:37.244154   54538 addons.go:505] duration metric: took 6.952422ms for enable addons: enabled=[]
	I0311 21:19:37.404800   54538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:19:37.421405   54538 node_ready.go:35] waiting up to 6m0s for node "pause-717098" to be "Ready" ...
	I0311 21:19:37.425720   54538 node_ready.go:49] node "pause-717098" has status "Ready":"True"
	I0311 21:19:37.425739   54538 node_ready.go:38] duration metric: took 4.303101ms for node "pause-717098" to be "Ready" ...
	I0311 21:19:37.425746   54538 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:19:37.431477   54538 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qrgqd" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.604105   54538 pod_ready.go:92] pod "coredns-5dd5756b68-qrgqd" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:37.604139   54538 pod_ready.go:81] duration metric: took 172.634498ms for pod "coredns-5dd5756b68-qrgqd" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.604153   54538 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:38.004714   54538 pod_ready.go:92] pod "etcd-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:38.004747   54538 pod_ready.go:81] duration metric: took 400.57529ms for pod "etcd-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:38.004761   54538 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:38.403988   54538 pod_ready.go:92] pod "kube-apiserver-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:38.404010   54538 pod_ready.go:81] duration metric: took 399.24246ms for pod "kube-apiserver-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:38.404019   54538 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:38.806934   54538 pod_ready.go:92] pod "kube-controller-manager-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:38.806958   54538 pod_ready.go:81] duration metric: took 402.932492ms for pod "kube-controller-manager-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:38.806967   54538 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4xhj5" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:39.204219   54538 pod_ready.go:92] pod "kube-proxy-4xhj5" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:39.204254   54538 pod_ready.go:81] duration metric: took 397.27898ms for pod "kube-proxy-4xhj5" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:39.204266   54538 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:39.604539   54538 pod_ready.go:92] pod "kube-scheduler-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:39.604573   54538 pod_ready.go:81] duration metric: took 400.29019ms for pod "kube-scheduler-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:39.604581   54538 pod_ready.go:38] duration metric: took 2.17882522s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:19:39.604598   54538 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:19:39.604655   54538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:19:39.620080   54538 api_server.go:72] duration metric: took 2.382927559s to wait for apiserver process to appear ...
	I0311 21:19:39.620102   54538 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:19:39.620116   54538 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I0311 21:19:39.626788   54538 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I0311 21:19:39.627840   54538 api_server.go:141] control plane version: v1.28.4
	I0311 21:19:39.627863   54538 api_server.go:131] duration metric: took 7.754395ms to wait for apiserver health ...
	I0311 21:19:39.627873   54538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:19:39.806190   54538 system_pods.go:59] 6 kube-system pods found
	I0311 21:19:39.806217   54538 system_pods.go:61] "coredns-5dd5756b68-qrgqd" [74a57a71-c96e-42cb-83ef-45863ae77f5d] Running
	I0311 21:19:39.806222   54538 system_pods.go:61] "etcd-pause-717098" [afa2a24e-5207-48f5-b6f9-776d9d530904] Running
	I0311 21:19:39.806225   54538 system_pods.go:61] "kube-apiserver-pause-717098" [f94c4f45-861b-465f-8e3b-33c9375b404b] Running
	I0311 21:19:39.806228   54538 system_pods.go:61] "kube-controller-manager-pause-717098" [f65c88bd-54bd-49a9-874b-ed670e14b3da] Running
	I0311 21:19:39.806231   54538 system_pods.go:61] "kube-proxy-4xhj5" [d5c12c7e-fe54-493b-b844-75d7f9c4a002] Running
	I0311 21:19:39.806234   54538 system_pods.go:61] "kube-scheduler-pause-717098" [e900995d-c288-4bf3-93ba-9dbdca63b07b] Running
	I0311 21:19:39.806239   54538 system_pods.go:74] duration metric: took 178.359586ms to wait for pod list to return data ...
	I0311 21:19:39.806246   54538 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:19:40.003616   54538 default_sa.go:45] found service account: "default"
	I0311 21:19:40.003649   54538 default_sa.go:55] duration metric: took 197.397571ms for default service account to be created ...
	I0311 21:19:40.003666   54538 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:19:40.208179   54538 system_pods.go:86] 6 kube-system pods found
	I0311 21:19:40.208207   54538 system_pods.go:89] "coredns-5dd5756b68-qrgqd" [74a57a71-c96e-42cb-83ef-45863ae77f5d] Running
	I0311 21:19:40.208214   54538 system_pods.go:89] "etcd-pause-717098" [afa2a24e-5207-48f5-b6f9-776d9d530904] Running
	I0311 21:19:40.208220   54538 system_pods.go:89] "kube-apiserver-pause-717098" [f94c4f45-861b-465f-8e3b-33c9375b404b] Running
	I0311 21:19:40.208230   54538 system_pods.go:89] "kube-controller-manager-pause-717098" [f65c88bd-54bd-49a9-874b-ed670e14b3da] Running
	I0311 21:19:40.208236   54538 system_pods.go:89] "kube-proxy-4xhj5" [d5c12c7e-fe54-493b-b844-75d7f9c4a002] Running
	I0311 21:19:40.208243   54538 system_pods.go:89] "kube-scheduler-pause-717098" [e900995d-c288-4bf3-93ba-9dbdca63b07b] Running
	I0311 21:19:40.208250   54538 system_pods.go:126] duration metric: took 204.577058ms to wait for k8s-apps to be running ...
	I0311 21:19:40.208259   54538 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:19:40.208309   54538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:19:40.225850   54538 system_svc.go:56] duration metric: took 17.582538ms WaitForService to wait for kubelet
	I0311 21:19:40.225877   54538 kubeadm.go:576] duration metric: took 2.988726949s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:19:40.225897   54538 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:19:40.404202   54538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:19:40.404225   54538 node_conditions.go:123] node cpu capacity is 2
	I0311 21:19:40.404235   54538 node_conditions.go:105] duration metric: took 178.332981ms to run NodePressure ...
	I0311 21:19:40.404246   54538 start.go:240] waiting for startup goroutines ...
	I0311 21:19:40.404252   54538 start.go:245] waiting for cluster config update ...
	I0311 21:19:40.404259   54538 start.go:254] writing updated cluster config ...
	I0311 21:19:40.404512   54538 ssh_runner.go:195] Run: rm -f paused
	I0311 21:19:40.452586   54538 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:19:40.454746   54538 out.go:177] * Done! kubectl is now configured to use "pause-717098" cluster and "default" namespace by default
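	(Editor's note, not part of the captured run: the pause-717098 segment above ends with minikube probing the apiserver healthz endpoint at https://192.168.50.163:8443/healthz and receiving "200 / ok" before reporting Done. The following is a minimal Go sketch of that kind of probe. The URL and the expected 200/"ok" response are taken from the log; the timeout value and the InsecureSkipVerify TLS setting are assumptions made only to keep the sketch self-contained — the real check authenticates against the cluster CA.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for the sketch only; minikube verifies the apiserver
				// certificate against the cluster CA instead of skipping verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		resp, err := client.Get("https://192.168.50.163:8443/healthz")
		if err != nil {
			fmt.Println("healthz request failed:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		// The log above records "returned 200" followed by the body "ok".
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
	}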
	
	
	==> CRI-O <==
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.190949929Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=099d3125-e202-4408-9ea0-18d5e8738c95 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.192782016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1179ab35-a474-45b9-bd54-e52e40da788b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.193438697Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710191981193327780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1179ab35-a474-45b9-bd54-e52e40da788b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.194034565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fe1c463-1924-4ac7-b7f8-6eb10eee2593 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.194117885Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fe1c463-1924-4ac7-b7f8-6eb10eee2593 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.194481329Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4731318699b2019239adccf696ddb34ca81aed86a89e5494b86519edc9033e9e,PodSandboxId:96218f26b2f31c160fc3b6799b882a3e61e23a4759aa129608cab6e3ab6308a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710191963484176003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b96f9fa627107e716abca74e5ab6dfcb06ec9c4d1eb6bcda77cf31eb4b6d399,PodSandboxId:c30d241d480724a425fa4df88bcc61faa15127a2f04e1c33f2c99192eab065fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710191963458699533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b910fcb0a6d431bdc6d4aca06fba76b8fa0dcff355becad7704b8c0ca61e6c5,PodSandboxId:4c9bdc775ff42460706f1c47b24257168b99b15597902314a03dfa68f4123d88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710191957744448452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e1d20c50f148e5b6817d3981cee3e3a2e2dd8bf4be237185a87807fd3e8f0c,PodSandboxId:741eddbee9d0cfd575fa5dd91674c810c28d5612e47fc10435277d487bcbf968,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710191957719185733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333183c3c51fba0b7f261d616d5ec022628069db7c342abd2606ee30f7f320bd,PodSandboxId:6ba9e5ab209c6612ae55bbe0591a9c3b56ce30d5c9c3182d6ba74df3f354e066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710191957738854827,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f22983f5021de46fba
4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fb9fc3c7e0ce04610a0e6bf61bdfa522f9ce2fc195a100a2f598619b5d67f,PodSandboxId:aae69d33ed69cf822717622159f00c0feaa4fe58e23718829b4f293325e4d198,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710191957682943422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io
.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1922e224e43ada1274d2dab49d83f21a12deb63f58141bf9ab304755f4793e3,PodSandboxId:adace3260d57f384e8dcf6888b755b9e76a31aae458f26a922579e95d6323233,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710191941982333886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b
7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0976acdedcf95dd211d1f36fe72e3c5c5e504fd572d291cda16ba266bb1c48,PodSandboxId:6fd4dc8d2482c7202a998c5f5993c6820e9b564beadfdaa21b9965370aaf8771,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710191941843254468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash
: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b3e179246e85911e4f5e610e037532173fe6ed3223da3c346ff1978e371195,PodSandboxId:d972e6b30611b062ec44b32cac71aa17440e8d1a83e27ccf5f665639d2834716,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710191941890542312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.contain
er.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173082cedec1acbe4feda4779bb6df9def3edc0c7d34b0e451a1cd7d86c4ce16,PodSandboxId:5cf82986b6ab25f8f1e9a349cdbfe31fd68b8bcfeabfd853dab33eed779e7dea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710191941861844362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36afd2df4351738709d5d1eb16a39204f4129473af32e31066b607ce341e1a80,PodSandboxId:99c55ca2f3976a1914bc7d3cb2f43ce3bfa612be828f3bc6cdde52861c4c5a90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710191941607295082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03578cb6bf10e3bea8c86c3ced9926dc4ea7dc66963ee5eaafd0e6c8016eff83,PodSandboxId:14d96b86dcc55a53f0bbba0ae24a2a042912de87a036ec37508bb46e603828f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710191941421813269,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9f22983f5021de46fba4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fe1c463-1924-4ac7-b7f8-6eb10eee2593 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.241162973Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68fcd999-6286-4022-a956-f2945ae666f8 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.241248961Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68fcd999-6286-4022-a956-f2945ae666f8 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.242627257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e180eb0e-ee9e-4a6e-aa2d-38a08e7d5f25 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.243000283Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710191981242979861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e180eb0e-ee9e-4a6e-aa2d-38a08e7d5f25 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.243515357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d30fd5d-a4d9-440e-9300-412dc29b250b name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.243807318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d30fd5d-a4d9-440e-9300-412dc29b250b name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.244275943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4731318699b2019239adccf696ddb34ca81aed86a89e5494b86519edc9033e9e,PodSandboxId:96218f26b2f31c160fc3b6799b882a3e61e23a4759aa129608cab6e3ab6308a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710191963484176003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b96f9fa627107e716abca74e5ab6dfcb06ec9c4d1eb6bcda77cf31eb4b6d399,PodSandboxId:c30d241d480724a425fa4df88bcc61faa15127a2f04e1c33f2c99192eab065fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710191963458699533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b910fcb0a6d431bdc6d4aca06fba76b8fa0dcff355becad7704b8c0ca61e6c5,PodSandboxId:4c9bdc775ff42460706f1c47b24257168b99b15597902314a03dfa68f4123d88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710191957744448452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e1d20c50f148e5b6817d3981cee3e3a2e2dd8bf4be237185a87807fd3e8f0c,PodSandboxId:741eddbee9d0cfd575fa5dd91674c810c28d5612e47fc10435277d487bcbf968,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710191957719185733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333183c3c51fba0b7f261d616d5ec022628069db7c342abd2606ee30f7f320bd,PodSandboxId:6ba9e5ab209c6612ae55bbe0591a9c3b56ce30d5c9c3182d6ba74df3f354e066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710191957738854827,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f22983f5021de46fba
4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fb9fc3c7e0ce04610a0e6bf61bdfa522f9ce2fc195a100a2f598619b5d67f,PodSandboxId:aae69d33ed69cf822717622159f00c0feaa4fe58e23718829b4f293325e4d198,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710191957682943422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io
.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1922e224e43ada1274d2dab49d83f21a12deb63f58141bf9ab304755f4793e3,PodSandboxId:adace3260d57f384e8dcf6888b755b9e76a31aae458f26a922579e95d6323233,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710191941982333886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b
7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0976acdedcf95dd211d1f36fe72e3c5c5e504fd572d291cda16ba266bb1c48,PodSandboxId:6fd4dc8d2482c7202a998c5f5993c6820e9b564beadfdaa21b9965370aaf8771,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710191941843254468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash
: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b3e179246e85911e4f5e610e037532173fe6ed3223da3c346ff1978e371195,PodSandboxId:d972e6b30611b062ec44b32cac71aa17440e8d1a83e27ccf5f665639d2834716,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710191941890542312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.contain
er.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173082cedec1acbe4feda4779bb6df9def3edc0c7d34b0e451a1cd7d86c4ce16,PodSandboxId:5cf82986b6ab25f8f1e9a349cdbfe31fd68b8bcfeabfd853dab33eed779e7dea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710191941861844362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36afd2df4351738709d5d1eb16a39204f4129473af32e31066b607ce341e1a80,PodSandboxId:99c55ca2f3976a1914bc7d3cb2f43ce3bfa612be828f3bc6cdde52861c4c5a90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710191941607295082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03578cb6bf10e3bea8c86c3ced9926dc4ea7dc66963ee5eaafd0e6c8016eff83,PodSandboxId:14d96b86dcc55a53f0bbba0ae24a2a042912de87a036ec37508bb46e603828f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710191941421813269,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9f22983f5021de46fba4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d30fd5d-a4d9-440e-9300-412dc29b250b name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.245867725Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfd981d8-357d-4712-b3c5-c7cb455933d8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.246056049Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c30d241d480724a425fa4df88bcc61faa15127a2f04e1c33f2c99192eab065fb,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-qrgqd,Uid:74a57a71-c96e-42cb-83ef-45863ae77f5d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710191954058749813,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-11T21:18:39.170938136Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:96218f26b2f31c160fc3b6799b882a3e61e23a4759aa129608cab6e3ab6308a7,Metadata:&PodSandboxMetadata{Name:kube-proxy-4xhj5,Uid:d5c12c7e-fe54-493b-b844-75d7f9c4a002,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1710191954013527793,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-11T21:18:37.807989623Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aae69d33ed69cf822717622159f00c0feaa4fe58e23718829b4f293325e4d198,Metadata:&PodSandboxMetadata{Name:etcd-pause-717098,Uid:dd223270ff3fac7adedc3f69a104c16f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710191954002824875,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
etcd.advertise-client-urls: https://192.168.50.163:2379,kubernetes.io/config.hash: dd223270ff3fac7adedc3f69a104c16f,kubernetes.io/config.seen: 2024-03-11T21:18:25.273056205Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4c9bdc775ff42460706f1c47b24257168b99b15597902314a03dfa68f4123d88,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-717098,Uid:c0844fe6270bb1cf37846aa5811bb4e7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710191954000916778,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c0844fe6270bb1cf37846aa5811bb4e7,kubernetes.io/config.seen: 2024-03-11T21:18:25.273061622Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6ba9e5ab209c6612ae55bbe0591a9c3b5
6ce30d5c9c3182d6ba74df3f354e066,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-717098,Uid:9f22983f5021de46fba4a218f1776f79,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710191953964444258,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f22983f5021de46fba4a218f1776f79,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.163:8443,kubernetes.io/config.hash: 9f22983f5021de46fba4a218f1776f79,kubernetes.io/config.seen: 2024-03-11T21:18:25.273060413Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:741eddbee9d0cfd575fa5dd91674c810c28d5612e47fc10435277d487bcbf968,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-717098,Uid:e098020ed89d0b71f97088c28b03d960,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710191953944625584,Lab
els:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e098020ed89d0b71f97088c28b03d960,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e098020ed89d0b71f97088c28b03d960,kubernetes.io/config.seen: 2024-03-11T21:18:25.273062681Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=bfd981d8-357d-4712-b3c5-c7cb455933d8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.246618876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a58cf959-370d-4a92-a7a9-1617347e1d8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.246697381Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a58cf959-370d-4a92-a7a9-1617347e1d8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.246912991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4731318699b2019239adccf696ddb34ca81aed86a89e5494b86519edc9033e9e,PodSandboxId:96218f26b2f31c160fc3b6799b882a3e61e23a4759aa129608cab6e3ab6308a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710191963484176003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b96f9fa627107e716abca74e5ab6dfcb06ec9c4d1eb6bcda77cf31eb4b6d399,PodSandboxId:c30d241d480724a425fa4df88bcc61faa15127a2f04e1c33f2c99192eab065fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710191963458699533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b910fcb0a6d431bdc6d4aca06fba76b8fa0dcff355becad7704b8c0ca61e6c5,PodSandboxId:4c9bdc775ff42460706f1c47b24257168b99b15597902314a03dfa68f4123d88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710191957744448452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e1d20c50f148e5b6817d3981cee3e3a2e2dd8bf4be237185a87807fd3e8f0c,PodSandboxId:741eddbee9d0cfd575fa5dd91674c810c28d5612e47fc10435277d487bcbf968,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710191957719185733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333183c3c51fba0b7f261d616d5ec022628069db7c342abd2606ee30f7f320bd,PodSandboxId:6ba9e5ab209c6612ae55bbe0591a9c3b56ce30d5c9c3182d6ba74df3f354e066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710191957738854827,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f22983f5021de46fba
4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fb9fc3c7e0ce04610a0e6bf61bdfa522f9ce2fc195a100a2f598619b5d67f,PodSandboxId:aae69d33ed69cf822717622159f00c0feaa4fe58e23718829b4f293325e4d198,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710191957682943422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io
.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a58cf959-370d-4a92-a7a9-1617347e1d8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.290731096Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4320bdc-6281-4979-83c0-b60d20a78382 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.290826810Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4320bdc-6281-4979-83c0-b60d20a78382 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.291802230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a063cea-ec6e-4f8e-bbbb-1f0f2943fe77 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.292488208Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710191981292461630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a063cea-ec6e-4f8e-bbbb-1f0f2943fe77 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.293052706Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5298d475-dc9a-489a-8044-16e02c2e034a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.293132637Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5298d475-dc9a-489a-8044-16e02c2e034a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:41 pause-717098 crio[2740]: time="2024-03-11 21:19:41.293596647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4731318699b2019239adccf696ddb34ca81aed86a89e5494b86519edc9033e9e,PodSandboxId:96218f26b2f31c160fc3b6799b882a3e61e23a4759aa129608cab6e3ab6308a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710191963484176003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b96f9fa627107e716abca74e5ab6dfcb06ec9c4d1eb6bcda77cf31eb4b6d399,PodSandboxId:c30d241d480724a425fa4df88bcc61faa15127a2f04e1c33f2c99192eab065fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710191963458699533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b910fcb0a6d431bdc6d4aca06fba76b8fa0dcff355becad7704b8c0ca61e6c5,PodSandboxId:4c9bdc775ff42460706f1c47b24257168b99b15597902314a03dfa68f4123d88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710191957744448452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e1d20c50f148e5b6817d3981cee3e3a2e2dd8bf4be237185a87807fd3e8f0c,PodSandboxId:741eddbee9d0cfd575fa5dd91674c810c28d5612e47fc10435277d487bcbf968,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710191957719185733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333183c3c51fba0b7f261d616d5ec022628069db7c342abd2606ee30f7f320bd,PodSandboxId:6ba9e5ab209c6612ae55bbe0591a9c3b56ce30d5c9c3182d6ba74df3f354e066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710191957738854827,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f22983f5021de46fba
4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fb9fc3c7e0ce04610a0e6bf61bdfa522f9ce2fc195a100a2f598619b5d67f,PodSandboxId:aae69d33ed69cf822717622159f00c0feaa4fe58e23718829b4f293325e4d198,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710191957682943422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io
.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1922e224e43ada1274d2dab49d83f21a12deb63f58141bf9ab304755f4793e3,PodSandboxId:adace3260d57f384e8dcf6888b755b9e76a31aae458f26a922579e95d6323233,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710191941982333886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b
7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0976acdedcf95dd211d1f36fe72e3c5c5e504fd572d291cda16ba266bb1c48,PodSandboxId:6fd4dc8d2482c7202a998c5f5993c6820e9b564beadfdaa21b9965370aaf8771,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710191941843254468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash
: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b3e179246e85911e4f5e610e037532173fe6ed3223da3c346ff1978e371195,PodSandboxId:d972e6b30611b062ec44b32cac71aa17440e8d1a83e27ccf5f665639d2834716,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710191941890542312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.contain
er.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173082cedec1acbe4feda4779bb6df9def3edc0c7d34b0e451a1cd7d86c4ce16,PodSandboxId:5cf82986b6ab25f8f1e9a349cdbfe31fd68b8bcfeabfd853dab33eed779e7dea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710191941861844362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36afd2df4351738709d5d1eb16a39204f4129473af32e31066b607ce341e1a80,PodSandboxId:99c55ca2f3976a1914bc7d3cb2f43ce3bfa612be828f3bc6cdde52861c4c5a90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710191941607295082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03578cb6bf10e3bea8c86c3ced9926dc4ea7dc66963ee5eaafd0e6c8016eff83,PodSandboxId:14d96b86dcc55a53f0bbba0ae24a2a042912de87a036ec37508bb46e603828f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710191941421813269,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9f22983f5021de46fba4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5298d475-dc9a-489a-8044-16e02c2e034a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4731318699b20       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   17 seconds ago      Running             kube-proxy                2                   96218f26b2f31       kube-proxy-4xhj5
	6b96f9fa62710       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   17 seconds ago      Running             coredns                   2                   c30d241d48072       coredns-5dd5756b68-qrgqd
	3b910fcb0a6d4       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   23 seconds ago      Running             kube-controller-manager   2                   4c9bdc775ff42       kube-controller-manager-pause-717098
	333183c3c51fb       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   23 seconds ago      Running             kube-apiserver            2                   6ba9e5ab209c6       kube-apiserver-pause-717098
	77e1d20c50f14       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   23 seconds ago      Running             kube-scheduler            2                   741eddbee9d0c       kube-scheduler-pause-717098
	9f0fb9fc3c7e0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   23 seconds ago      Running             etcd                      2                   aae69d33ed69c       etcd-pause-717098
	a1922e224e43a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   39 seconds ago      Exited              kube-proxy                1                   adace3260d57f       kube-proxy-4xhj5
	57b3e179246e8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   39 seconds ago      Exited              coredns                   1                   d972e6b30611b       coredns-5dd5756b68-qrgqd
	173082cedec1a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   39 seconds ago      Exited              etcd                      1                   5cf82986b6ab2       etcd-pause-717098
	bd0976acdedcf       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   39 seconds ago      Exited              kube-controller-manager   1                   6fd4dc8d2482c       kube-controller-manager-pause-717098
	36afd2df43517       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   39 seconds ago      Exited              kube-scheduler            1                   99c55ca2f3976       kube-scheduler-pause-717098
	03578cb6bf10e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   39 seconds ago      Exited              kube-apiserver            1                   14d96b86dcc55       kube-apiserver-pause-717098
	
	
	==> coredns [57b3e179246e85911e4f5e610e037532173fe6ed3223da3c346ff1978e371195] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:41199 - 34979 "HINFO IN 7726036319816656465.436920015681860341. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009544172s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [6b96f9fa627107e716abca74e5ab6dfcb06ec9c4d1eb6bcda77cf31eb4b6d399] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53126 - 55337 "HINFO IN 5806598755898245643.9217482599092167676. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016182314s
	
	
	==> describe nodes <==
	Name:               pause-717098
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-717098
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=pause-717098
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T21_18_25_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 21:18:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-717098
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:19:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 21:19:22 +0000   Mon, 11 Mar 2024 21:18:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 21:19:22 +0000   Mon, 11 Mar 2024 21:18:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 21:19:22 +0000   Mon, 11 Mar 2024 21:18:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 21:19:22 +0000   Mon, 11 Mar 2024 21:18:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.163
	  Hostname:    pause-717098
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c55693a10de425cac68f33e1c8480ff
	  System UUID:                1c55693a-10de-425c-ac68-f33e1c8480ff
	  Boot ID:                    bffd17f1-85c8-453a-8485-d94fb780e0bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-qrgqd                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 etcd-pause-717098                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         76s
	  kube-system                 kube-apiserver-pause-717098             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-pause-717098    200m (10%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-4xhj5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-pause-717098             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node pause-717098 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node pause-717098 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node pause-717098 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     76s                kubelet          Node pause-717098 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  76s                kubelet          Node pause-717098 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet          Node pause-717098 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                76s                kubelet          Node pause-717098 status is now: NodeReady
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           64s                node-controller  Node pause-717098 event: Registered Node pause-717098 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-717098 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-717098 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-717098 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-717098 event: Registered Node pause-717098 in Controller
	
	
	==> dmesg <==
	[  +0.074309] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.216743] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.138723] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.293701] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +5.512610] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.063754] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.884096] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.416734] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.854403] systemd-fstab-generator[1273]: Ignoring "noauto" option for root device
	[  +0.085710] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.472494] hrtimer: interrupt took 6477603 ns
	[ +11.582670] systemd-fstab-generator[1486]: Ignoring "noauto" option for root device
	[  +0.108304] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.411524] kauditd_printk_skb: 82 callbacks suppressed
	[ +15.009866] systemd-fstab-generator[2219]: Ignoring "noauto" option for root device
	[Mar11 21:19] systemd-fstab-generator[2263]: Ignoring "noauto" option for root device
	[  +0.362637] systemd-fstab-generator[2385]: Ignoring "noauto" option for root device
	[  +0.333487] systemd-fstab-generator[2460]: Ignoring "noauto" option for root device
	[  +0.698848] systemd-fstab-generator[2670]: Ignoring "noauto" option for root device
	[ +11.338533] systemd-fstab-generator[2941]: Ignoring "noauto" option for root device
	[  +0.095590] kauditd_printk_skb: 169 callbacks suppressed
	[  +3.142176] systemd-fstab-generator[3335]: Ignoring "noauto" option for root device
	[  +6.687986] kauditd_printk_skb: 105 callbacks suppressed
	[ +11.427250] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.323287] systemd-fstab-generator[3768]: Ignoring "noauto" option for root device
	
	
	==> etcd [173082cedec1acbe4feda4779bb6df9def3edc0c7d34b0e451a1cd7d86c4ce16] <==
	
	
	==> etcd [9f0fb9fc3c7e0ce04610a0e6bf61bdfa522f9ce2fc195a100a2f598619b5d67f] <==
	{"level":"info","ts":"2024-03-11T21:19:18.207864Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T21:19:18.207874Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T21:19:18.208121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 switched to configuration voters=(541872491336215268)"}
	{"level":"info","ts":"2024-03-11T21:19:18.208208Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c04ffccd875dba59","local-member-id":"7851e28efa6aae4","added-peer-id":"7851e28efa6aae4","added-peer-peer-urls":["https://192.168.50.163:2380"]}
	{"level":"info","ts":"2024-03-11T21:19:18.208299Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c04ffccd875dba59","local-member-id":"7851e28efa6aae4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:19:18.212647Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:19:18.21817Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-11T21:19:18.243538Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7851e28efa6aae4","initial-advertise-peer-urls":["https://192.168.50.163:2380"],"listen-peer-urls":["https://192.168.50.163:2380"],"advertise-client-urls":["https://192.168.50.163:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.163:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T21:19:18.244571Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T21:19:18.240453Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.163:2380"}
	{"level":"info","ts":"2024-03-11T21:19:18.245899Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.163:2380"}
	{"level":"info","ts":"2024-03-11T21:19:20.086464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-11T21:19:20.086606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-11T21:19:20.08667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 received MsgPreVoteResp from 7851e28efa6aae4 at term 2"}
	{"level":"info","ts":"2024-03-11T21:19:20.086726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 became candidate at term 3"}
	{"level":"info","ts":"2024-03-11T21:19:20.086758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 received MsgVoteResp from 7851e28efa6aae4 at term 3"}
	{"level":"info","ts":"2024-03-11T21:19:20.086793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 became leader at term 3"}
	{"level":"info","ts":"2024-03-11T21:19:20.086826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7851e28efa6aae4 elected leader 7851e28efa6aae4 at term 3"}
	{"level":"info","ts":"2024-03-11T21:19:20.093745Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7851e28efa6aae4","local-member-attributes":"{Name:pause-717098 ClientURLs:[https://192.168.50.163:2379]}","request-path":"/0/members/7851e28efa6aae4/attributes","cluster-id":"c04ffccd875dba59","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T21:19:20.09377Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:19:20.094196Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T21:19:20.094245Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T21:19:20.094266Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:19:20.095866Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.163:2379"}
	{"level":"info","ts":"2024-03-11T21:19:20.096267Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:19:41 up 1 min,  0 users,  load average: 1.23, 0.46, 0.16
	Linux pause-717098 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [03578cb6bf10e3bea8c86c3ced9926dc4ea7dc66963ee5eaafd0e6c8016eff83] <==
	
	
	==> kube-apiserver [333183c3c51fba0b7f261d616d5ec022628069db7c342abd2606ee30f7f320bd] <==
	I0311 21:19:22.172706       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0311 21:19:22.172735       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0311 21:19:22.172758       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0311 21:19:22.304297       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0311 21:19:22.304522       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0311 21:19:22.305691       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0311 21:19:22.305744       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0311 21:19:22.306428       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0311 21:19:22.318791       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0311 21:19:22.333247       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0311 21:19:22.333847       1 shared_informer.go:318] Caches are synced for configmaps
	I0311 21:19:22.334339       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0311 21:19:22.341415       1 aggregator.go:166] initial CRD sync complete...
	I0311 21:19:22.341521       1 autoregister_controller.go:141] Starting autoregister controller
	I0311 21:19:22.341566       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0311 21:19:22.341579       1 cache.go:39] Caches are synced for autoregister controller
	E0311 21:19:22.363670       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0311 21:19:23.110226       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0311 21:19:24.009211       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0311 21:19:24.047208       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0311 21:19:24.108561       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0311 21:19:24.145775       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0311 21:19:24.157616       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0311 21:19:34.958953       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0311 21:19:35.170667       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3b910fcb0a6d431bdc6d4aca06fba76b8fa0dcff355becad7704b8c0ca61e6c5] <==
	I0311 21:19:34.948522       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-717098"
	I0311 21:19:34.948618       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0311 21:19:34.948695       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0311 21:19:34.948795       1 taint_manager.go:210] "Sending events to api server"
	I0311 21:19:34.948994       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0311 21:19:34.949189       1 event.go:307] "Event occurred" object="pause-717098" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-717098 event: Registered Node pause-717098 in Controller"
	I0311 21:19:34.951664       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0311 21:19:34.952006       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0311 21:19:34.955636       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0311 21:19:34.960052       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0311 21:19:34.972744       1 shared_informer.go:318] Caches are synced for daemon sets
	I0311 21:19:34.977235       1 shared_informer.go:318] Caches are synced for namespace
	I0311 21:19:34.981551       1 shared_informer.go:318] Caches are synced for TTL
	I0311 21:19:35.002608       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0311 21:19:35.006032       1 shared_informer.go:318] Caches are synced for deployment
	I0311 21:19:35.013443       1 shared_informer.go:318] Caches are synced for HPA
	I0311 21:19:35.023458       1 shared_informer.go:318] Caches are synced for attach detach
	I0311 21:19:35.052942       1 shared_informer.go:318] Caches are synced for disruption
	I0311 21:19:35.070336       1 shared_informer.go:318] Caches are synced for resource quota
	I0311 21:19:35.099610       1 shared_informer.go:318] Caches are synced for resource quota
	I0311 21:19:35.132608       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0311 21:19:35.156562       1 shared_informer.go:318] Caches are synced for endpoint
	I0311 21:19:35.532566       1 shared_informer.go:318] Caches are synced for garbage collector
	I0311 21:19:35.566235       1 shared_informer.go:318] Caches are synced for garbage collector
	I0311 21:19:35.566424       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [bd0976acdedcf95dd211d1f36fe72e3c5c5e504fd572d291cda16ba266bb1c48] <==
	
	
	==> kube-proxy [4731318699b2019239adccf696ddb34ca81aed86a89e5494b86519edc9033e9e] <==
	I0311 21:19:23.709549       1 server_others.go:69] "Using iptables proxy"
	I0311 21:19:23.732102       1 node.go:141] Successfully retrieved node IP: 192.168.50.163
	I0311 21:19:23.830090       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 21:19:23.830149       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 21:19:23.843184       1 server_others.go:152] "Using iptables Proxier"
	I0311 21:19:23.843288       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 21:19:23.851833       1 server.go:846] "Version info" version="v1.28.4"
	I0311 21:19:23.851888       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:19:23.853122       1 config.go:188] "Starting service config controller"
	I0311 21:19:23.853193       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 21:19:23.853303       1 config.go:97] "Starting endpoint slice config controller"
	I0311 21:19:23.853339       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 21:19:23.856217       1 config.go:315] "Starting node config controller"
	I0311 21:19:23.856259       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 21:19:23.954465       1 shared_informer.go:318] Caches are synced for service config
	I0311 21:19:23.955723       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 21:19:23.957649       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [a1922e224e43ada1274d2dab49d83f21a12deb63f58141bf9ab304755f4793e3] <==
	
	
	==> kube-scheduler [36afd2df4351738709d5d1eb16a39204f4129473af32e31066b607ce341e1a80] <==
	
	
	==> kube-scheduler [77e1d20c50f148e5b6817d3981cee3e3a2e2dd8bf4be237185a87807fd3e8f0c] <==
	I0311 21:19:19.071490       1 serving.go:348] Generated self-signed cert in-memory
	W0311 21:19:22.266458       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0311 21:19:22.266758       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 21:19:22.267042       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0311 21:19:22.267191       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0311 21:19:22.355698       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0311 21:19:22.366477       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:19:22.376489       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0311 21:19:22.376710       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 21:19:22.390728       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0311 21:19:22.391113       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0311 21:19:22.477248       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 21:19:17 pause-717098 kubelet[3342]: I0311 21:19:17.653562    3342 scope.go:117] "RemoveContainer" containerID="36afd2df4351738709d5d1eb16a39204f4129473af32e31066b607ce341e1a80"
	Mar 11 21:19:17 pause-717098 kubelet[3342]: E0311 21:19:17.761538    3342 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-717098?timeout=10s\": dial tcp 192.168.50.163:8443: connect: connection refused" interval="800ms"
	Mar 11 21:19:17 pause-717098 kubelet[3342]: I0311 21:19:17.864787    3342 kubelet_node_status.go:70] "Attempting to register node" node="pause-717098"
	Mar 11 21:19:17 pause-717098 kubelet[3342]: E0311 21:19:17.867843    3342 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.163:8443: connect: connection refused" node="pause-717098"
	Mar 11 21:19:18 pause-717098 kubelet[3342]: W0311 21:19:18.117072    3342 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.163:8443: connect: connection refused
	Mar 11 21:19:18 pause-717098 kubelet[3342]: E0311 21:19:18.117149    3342 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.163:8443: connect: connection refused
	Mar 11 21:19:18 pause-717098 kubelet[3342]: E0311 21:19:18.150108    3342 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-717098.17bbd286264356b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-717098", UID:"pause-717098", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"pause-717098"}, FirstTimestamp:time.Date(2024, time.March, 11, 21, 19, 17, 126633144, time.Local), LastTimestamp:time.Date(2
024, time.March, 11, 21, 19, 17, 126633144, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"pause-717098"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.50.163:8443: connect: connection refused'(may retry after sleeping)
	Mar 11 21:19:18 pause-717098 kubelet[3342]: W0311 21:19:18.272016    3342 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.163:8443: connect: connection refused
	Mar 11 21:19:18 pause-717098 kubelet[3342]: E0311 21:19:18.272070    3342 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.163:8443: connect: connection refused
	Mar 11 21:19:18 pause-717098 kubelet[3342]: W0311 21:19:18.283221    3342 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-717098&limit=500&resourceVersion=0": dial tcp 192.168.50.163:8443: connect: connection refused
	Mar 11 21:19:18 pause-717098 kubelet[3342]: E0311 21:19:18.283276    3342 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-717098&limit=500&resourceVersion=0": dial tcp 192.168.50.163:8443: connect: connection refused
	Mar 11 21:19:18 pause-717098 kubelet[3342]: I0311 21:19:18.670319    3342 kubelet_node_status.go:70] "Attempting to register node" node="pause-717098"
	Mar 11 21:19:22 pause-717098 kubelet[3342]: I0311 21:19:22.348042    3342 kubelet_node_status.go:108] "Node was previously registered" node="pause-717098"
	Mar 11 21:19:22 pause-717098 kubelet[3342]: I0311 21:19:22.348162    3342 kubelet_node_status.go:73] "Successfully registered node" node="pause-717098"
	Mar 11 21:19:22 pause-717098 kubelet[3342]: I0311 21:19:22.383586    3342 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 11 21:19:22 pause-717098 kubelet[3342]: I0311 21:19:22.389536    3342 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.133624    3342 apiserver.go:52] "Watching apiserver"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.137461    3342 topology_manager.go:215] "Topology Admit Handler" podUID="d5c12c7e-fe54-493b-b844-75d7f9c4a002" podNamespace="kube-system" podName="kube-proxy-4xhj5"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.137613    3342 topology_manager.go:215] "Topology Admit Handler" podUID="74a57a71-c96e-42cb-83ef-45863ae77f5d" podNamespace="kube-system" podName="coredns-5dd5756b68-qrgqd"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.154552    3342 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.175517    3342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5c12c7e-fe54-493b-b844-75d7f9c4a002-lib-modules\") pod \"kube-proxy-4xhj5\" (UID: \"d5c12c7e-fe54-493b-b844-75d7f9c4a002\") " pod="kube-system/kube-proxy-4xhj5"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.175597    3342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5c12c7e-fe54-493b-b844-75d7f9c4a002-xtables-lock\") pod \"kube-proxy-4xhj5\" (UID: \"d5c12c7e-fe54-493b-b844-75d7f9c4a002\") " pod="kube-system/kube-proxy-4xhj5"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.439246    3342 scope.go:117] "RemoveContainer" containerID="57b3e179246e85911e4f5e610e037532173fe6ed3223da3c346ff1978e371195"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.440077    3342 scope.go:117] "RemoveContainer" containerID="a1922e224e43ada1274d2dab49d83f21a12deb63f58141bf9ab304755f4793e3"
	Mar 11 21:19:29 pause-717098 kubelet[3342]: I0311 21:19:29.728235    3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-717098 -n pause-717098
helpers_test.go:261: (dbg) Run:  kubectl --context pause-717098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-717098 -n pause-717098
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-717098 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-717098 logs -n 25: (1.39910425s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | stopped-upgrade-890519 stop           | minikube                  | jenkins | v1.26.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:15 UTC |
	| start   | -p stopped-upgrade-890519             | stopped-upgrade-890519    | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-169709             | running-upgrade-169709    | jenkins | v1.32.0 | 11 Mar 24 21:15 UTC | 11 Mar 24 21:17 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-890519             | stopped-upgrade-890519    | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:16 UTC |
	| start   | -p cert-options-406431                | cert-options-406431       | jenkins | v1.32.0 | 11 Mar 24 21:16 UTC | 11 Mar 24 21:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-169709             | running-upgrade-169709    | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	| start   | -p force-systemd-env-922319           | force-systemd-env-922319  | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:18 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-406431 ssh               | cert-options-406431       | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-406431 -- sudo        | cert-options-406431       | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-406431                | cert-options-406431       | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	| start   | -p pause-717098 --memory=2048         | pause-717098              | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:18 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:17 UTC |
	| start   | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:17 UTC | 11 Mar 24 21:18 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-922319           | force-systemd-env-922319  | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC | 11 Mar 24 21:18 UTC |
	| start   | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC | 11 Mar 24 21:19 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-717098                       | pause-717098              | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC | 11 Mar 24 21:19 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:18 UTC | 11 Mar 24 21:19 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-228186             | cert-expiration-228186    | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC | 11 Mar 24 21:19 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC | 11 Mar 24 21:19 UTC |
	| start   | -p NoKubernetes-364658                | NoKubernetes-364658       | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC |                     |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-171195          | kubernetes-upgrade-171195 | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC | 11 Mar 24 21:19 UTC |
	| start   | -p auto-427678 --memory=3072          | auto-427678               | jenkins | v1.32.0 | 11 Mar 24 21:19 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
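	The last entry in the table above is the "auto-427678" start that produced the log below. A minimal sketch (not part of the report) of driving that same invocation programmatically; it assumes a minikube binary is on PATH, and the profile name and flags are copied verbatim from the table row:
	
	// Sketch only: re-run the final recorded start command via os/exec.
	// Assumes `minikube` is on PATH; flags match the "auto-427678" row above.
	package main
	
	import (
		"os"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("minikube", "start",
			"-p", "auto-427678",
			"--memory=3072",
			"--alsologtostderr",
			"--wait=true",
			"--wait-timeout=15m",
			"--driver=kvm2",
			"--container-runtime=crio")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}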
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 21:19:28
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 21:19:28.591127   55482 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:19:28.591355   55482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:19:28.591363   55482 out.go:304] Setting ErrFile to fd 2...
	I0311 21:19:28.591367   55482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:19:28.591522   55482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:19:28.592066   55482 out.go:298] Setting JSON to false
	I0311 21:19:28.593142   55482 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7318,"bootTime":1710184651,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:19:28.593211   55482 start.go:139] virtualization: kvm guest
	I0311 21:19:28.595603   55482 out.go:177] * [auto-427678] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:19:28.597330   55482 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:19:28.598556   55482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:19:28.597348   55482 notify.go:220] Checking for updates...
	I0311 21:19:28.601052   55482 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:19:28.602383   55482 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:19:28.603672   55482 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:19:28.604997   55482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:19:28.606671   55482 config.go:182] Loaded profile config "NoKubernetes-364658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0311 21:19:28.606783   55482 config.go:182] Loaded profile config "cert-expiration-228186": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:19:28.606963   55482 config.go:182] Loaded profile config "pause-717098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:19:28.607068   55482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:19:28.641565   55482 out.go:177] * Using the kvm2 driver based on user configuration
	I0311 21:19:28.642909   55482 start.go:297] selected driver: kvm2
	I0311 21:19:28.642923   55482 start.go:901] validating driver "kvm2" against <nil>
	I0311 21:19:28.642934   55482 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:19:28.643802   55482 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:19:28.643901   55482 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:19:28.659164   55482 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:19:28.659208   55482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 21:19:28.659420   55482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:19:28.659447   55482 cni.go:84] Creating CNI manager for ""
	I0311 21:19:28.659454   55482 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:19:28.659467   55482 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 21:19:28.659522   55482 start.go:340] cluster config:
	{Name:auto-427678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-427678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:19:28.659619   55482 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:19:28.661388   55482 out.go:177] * Starting "auto-427678" primary control-plane node in "auto-427678" cluster
	I0311 21:19:27.491307   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:27.491821   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:27.491836   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:27.491787   55166 retry.go:31] will retry after 1.100375764s: waiting for machine to come up
	I0311 21:19:28.593916   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:28.594524   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:28.594546   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:28.594493   55166 retry.go:31] will retry after 1.297605075s: waiting for machine to come up
	I0311 21:19:29.893899   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:29.894418   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:29.894438   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:29.894365   55166 retry.go:31] will retry after 1.207673054s: waiting for machine to come up
	I0311 21:19:31.104140   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:31.104648   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:31.104658   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:31.104601   55166 retry.go:31] will retry after 1.459882908s: waiting for machine to come up
	I0311 21:19:28.205255   54538 pod_ready.go:102] pod "coredns-5dd5756b68-qrgqd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:19:30.194899   54538 pod_ready.go:92] pod "coredns-5dd5756b68-qrgqd" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:30.194926   54538 pod_ready.go:81] duration metric: took 6.008696213s for pod "coredns-5dd5756b68-qrgqd" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:30.194937   54538 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:30.199950   54538 pod_ready.go:92] pod "etcd-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:30.199974   54538 pod_ready.go:81] duration metric: took 5.028018ms for pod "etcd-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:30.199986   54538 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:32.208199   54538 pod_ready.go:102] pod "kube-apiserver-pause-717098" in "kube-system" namespace has status "Ready":"False"
	I0311 21:19:28.662747   55482 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:19:28.662806   55482 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0311 21:19:28.662817   55482 cache.go:56] Caching tarball of preloaded images
	I0311 21:19:28.662892   55482 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:19:28.662906   55482 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0311 21:19:28.663018   55482 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/config.json ...
	I0311 21:19:28.663040   55482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/config.json: {Name:mk2b4142a1d074325aa3354d6f08868465d3665a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:19:28.663191   55482 start.go:360] acquireMachinesLock for auto-427678: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:19:32.566066   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:32.566692   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:32.566713   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:32.566619   55166 retry.go:31] will retry after 2.087814321s: waiting for machine to come up
	I0311 21:19:34.656261   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:34.656684   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:34.656706   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:34.656613   55166 retry.go:31] will retry after 3.295172264s: waiting for machine to come up
	I0311 21:19:34.209886   54538 pod_ready.go:102] pod "kube-apiserver-pause-717098" in "kube-system" namespace has status "Ready":"False"
	I0311 21:19:36.707286   54538 pod_ready.go:102] pod "kube-apiserver-pause-717098" in "kube-system" namespace has status "Ready":"False"
	I0311 21:19:37.206600   54538 pod_ready.go:92] pod "kube-apiserver-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:37.206621   54538 pod_ready.go:81] duration metric: took 7.006627752s for pod "kube-apiserver-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.206629   54538 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.211903   54538 pod_ready.go:92] pod "kube-controller-manager-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:37.211923   54538 pod_ready.go:81] duration metric: took 5.286679ms for pod "kube-controller-manager-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.211933   54538 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4xhj5" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.216522   54538 pod_ready.go:92] pod "kube-proxy-4xhj5" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:37.216539   54538 pod_ready.go:81] duration metric: took 4.600082ms for pod "kube-proxy-4xhj5" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.216546   54538 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.221411   54538 pod_ready.go:92] pod "kube-scheduler-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:37.221429   54538 pod_ready.go:81] duration metric: took 4.87784ms for pod "kube-scheduler-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.221436   54538 pod_ready.go:38] duration metric: took 13.042973031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:19:37.221450   54538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:19:37.235546   54538 ops.go:34] apiserver oom_adj: -16
	I0311 21:19:37.235561   54538 kubeadm.go:591] duration metric: took 22.020075759s to restartPrimaryControlPlane
	I0311 21:19:37.235567   54538 kubeadm.go:393] duration metric: took 22.149959022s to StartCluster
	I0311 21:19:37.235578   54538 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:19:37.235629   54538 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:19:37.236841   54538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:19:37.237118   54538 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.163 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:19:37.238844   54538 out.go:177] * Verifying Kubernetes components...
	I0311 21:19:37.237205   54538 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:19:37.237334   54538 config.go:182] Loaded profile config "pause-717098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:19:37.240326   54538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:19:37.242807   54538 out.go:177] * Enabled addons: 
	I0311 21:19:37.244154   54538 addons.go:505] duration metric: took 6.952422ms for enable addons: enabled=[]
	I0311 21:19:37.404800   54538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:19:37.421405   54538 node_ready.go:35] waiting up to 6m0s for node "pause-717098" to be "Ready" ...
	I0311 21:19:37.425720   54538 node_ready.go:49] node "pause-717098" has status "Ready":"True"
	I0311 21:19:37.425739   54538 node_ready.go:38] duration metric: took 4.303101ms for node "pause-717098" to be "Ready" ...
	I0311 21:19:37.425746   54538 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:19:37.431477   54538 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qrgqd" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.604105   54538 pod_ready.go:92] pod "coredns-5dd5756b68-qrgqd" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:37.604139   54538 pod_ready.go:81] duration metric: took 172.634498ms for pod "coredns-5dd5756b68-qrgqd" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:37.604153   54538 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:38.004714   54538 pod_ready.go:92] pod "etcd-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:38.004747   54538 pod_ready.go:81] duration metric: took 400.57529ms for pod "etcd-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:38.004761   54538 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:38.403988   54538 pod_ready.go:92] pod "kube-apiserver-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:38.404010   54538 pod_ready.go:81] duration metric: took 399.24246ms for pod "kube-apiserver-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:38.404019   54538 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:38.806934   54538 pod_ready.go:92] pod "kube-controller-manager-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:38.806958   54538 pod_ready.go:81] duration metric: took 402.932492ms for pod "kube-controller-manager-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:38.806967   54538 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4xhj5" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:39.204219   54538 pod_ready.go:92] pod "kube-proxy-4xhj5" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:39.204254   54538 pod_ready.go:81] duration metric: took 397.27898ms for pod "kube-proxy-4xhj5" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:39.204266   54538 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:39.604539   54538 pod_ready.go:92] pod "kube-scheduler-pause-717098" in "kube-system" namespace has status "Ready":"True"
	I0311 21:19:39.604573   54538 pod_ready.go:81] duration metric: took 400.29019ms for pod "kube-scheduler-pause-717098" in "kube-system" namespace to be "Ready" ...
	I0311 21:19:39.604581   54538 pod_ready.go:38] duration metric: took 2.17882522s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:19:39.604598   54538 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:19:39.604655   54538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:19:39.620080   54538 api_server.go:72] duration metric: took 2.382927559s to wait for apiserver process to appear ...
	I0311 21:19:39.620102   54538 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:19:39.620116   54538 api_server.go:253] Checking apiserver healthz at https://192.168.50.163:8443/healthz ...
	I0311 21:19:39.626788   54538 api_server.go:279] https://192.168.50.163:8443/healthz returned 200:
	ok
	I0311 21:19:39.627840   54538 api_server.go:141] control plane version: v1.28.4
	I0311 21:19:39.627863   54538 api_server.go:131] duration metric: took 7.754395ms to wait for apiserver health ...
	I0311 21:19:39.627873   54538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:19:39.806190   54538 system_pods.go:59] 6 kube-system pods found
	I0311 21:19:39.806217   54538 system_pods.go:61] "coredns-5dd5756b68-qrgqd" [74a57a71-c96e-42cb-83ef-45863ae77f5d] Running
	I0311 21:19:39.806222   54538 system_pods.go:61] "etcd-pause-717098" [afa2a24e-5207-48f5-b6f9-776d9d530904] Running
	I0311 21:19:39.806225   54538 system_pods.go:61] "kube-apiserver-pause-717098" [f94c4f45-861b-465f-8e3b-33c9375b404b] Running
	I0311 21:19:39.806228   54538 system_pods.go:61] "kube-controller-manager-pause-717098" [f65c88bd-54bd-49a9-874b-ed670e14b3da] Running
	I0311 21:19:39.806231   54538 system_pods.go:61] "kube-proxy-4xhj5" [d5c12c7e-fe54-493b-b844-75d7f9c4a002] Running
	I0311 21:19:39.806234   54538 system_pods.go:61] "kube-scheduler-pause-717098" [e900995d-c288-4bf3-93ba-9dbdca63b07b] Running
	I0311 21:19:39.806239   54538 system_pods.go:74] duration metric: took 178.359586ms to wait for pod list to return data ...
	I0311 21:19:39.806246   54538 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:19:40.003616   54538 default_sa.go:45] found service account: "default"
	I0311 21:19:40.003649   54538 default_sa.go:55] duration metric: took 197.397571ms for default service account to be created ...
	I0311 21:19:40.003666   54538 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:19:40.208179   54538 system_pods.go:86] 6 kube-system pods found
	I0311 21:19:40.208207   54538 system_pods.go:89] "coredns-5dd5756b68-qrgqd" [74a57a71-c96e-42cb-83ef-45863ae77f5d] Running
	I0311 21:19:40.208214   54538 system_pods.go:89] "etcd-pause-717098" [afa2a24e-5207-48f5-b6f9-776d9d530904] Running
	I0311 21:19:40.208220   54538 system_pods.go:89] "kube-apiserver-pause-717098" [f94c4f45-861b-465f-8e3b-33c9375b404b] Running
	I0311 21:19:40.208230   54538 system_pods.go:89] "kube-controller-manager-pause-717098" [f65c88bd-54bd-49a9-874b-ed670e14b3da] Running
	I0311 21:19:40.208236   54538 system_pods.go:89] "kube-proxy-4xhj5" [d5c12c7e-fe54-493b-b844-75d7f9c4a002] Running
	I0311 21:19:40.208243   54538 system_pods.go:89] "kube-scheduler-pause-717098" [e900995d-c288-4bf3-93ba-9dbdca63b07b] Running
	I0311 21:19:40.208250   54538 system_pods.go:126] duration metric: took 204.577058ms to wait for k8s-apps to be running ...
	I0311 21:19:40.208259   54538 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:19:40.208309   54538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:19:40.225850   54538 system_svc.go:56] duration metric: took 17.582538ms WaitForService to wait for kubelet
	I0311 21:19:40.225877   54538 kubeadm.go:576] duration metric: took 2.988726949s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:19:40.225897   54538 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:19:40.404202   54538 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:19:40.404225   54538 node_conditions.go:123] node cpu capacity is 2
	I0311 21:19:40.404235   54538 node_conditions.go:105] duration metric: took 178.332981ms to run NodePressure ...
	I0311 21:19:40.404246   54538 start.go:240] waiting for startup goroutines ...
	I0311 21:19:40.404252   54538 start.go:245] waiting for cluster config update ...
	I0311 21:19:40.404259   54538 start.go:254] writing updated cluster config ...
	I0311 21:19:40.404512   54538 ssh_runner.go:195] Run: rm -f paused
	I0311 21:19:40.452586   54538 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:19:40.454746   54538 out.go:177] * Done! kubectl is now configured to use "pause-717098" cluster and "default" namespace by default
	I0311 21:19:37.953438   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:37.953844   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:37.953860   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:37.953806   55166 retry.go:31] will retry after 3.755944443s: waiting for machine to come up
	I0311 21:19:41.711093   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | domain NoKubernetes-364658 has defined MAC address 52:54:00:02:14:01 in network mk-NoKubernetes-364658
	I0311 21:19:41.711513   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | unable to find current IP address of domain NoKubernetes-364658 in network mk-NoKubernetes-364658
	I0311 21:19:41.711544   55133 main.go:141] libmachine: (NoKubernetes-364658) DBG | I0311 21:19:41.711487   55166 retry.go:31] will retry after 4.149286324s: waiting for machine to come up
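	The libmachine lines above poll libvirt for the NoKubernetes-364658 domain's IP address and back off between attempts ("will retry after ...: waiting for machine to come up"). A minimal sketch of that wait-and-retry pattern, using a hypothetical lookupIP stand-in rather than minikube's actual retry.go:
	
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// lookupIP is a hypothetical stand-in for asking libvirt for the domain's
	// current IP address; here it always fails, as it would before the DHCP
	// lease appears.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address of domain " + domain)
	}
	
	// waitForIP retries lookupIP with a growing delay, mirroring the
	// "will retry after ..." messages in the log above.
	func waitForIP(domain string, attempts int) (string, error) {
		delay := time.Second
		for i := 0; i < attempts; i++ {
			if ip, err := lookupIP(domain); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // back off a little more each round
		}
		return "", fmt.Errorf("domain %s never reported an IP", domain)
	}
	
	func main() {
		if _, err := waitForIP("NoKubernetes-364658", 3); err != nil {
			fmt.Println(err)
		}
	}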
	
	
	==> CRI-O <==
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.196682389Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50aea824-b930-4228-987b-bff956049da6 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.198136325Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2221243c-2241-41d8-b23e-ae7333c3462d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.198632000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710191983198611308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2221243c-2241-41d8-b23e-ae7333c3462d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.199558948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b332ed7c-ad56-4aaa-8287-09ddb6a83fa7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.199639511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b332ed7c-ad56-4aaa-8287-09ddb6a83fa7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.199942648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4731318699b2019239adccf696ddb34ca81aed86a89e5494b86519edc9033e9e,PodSandboxId:96218f26b2f31c160fc3b6799b882a3e61e23a4759aa129608cab6e3ab6308a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710191963484176003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b96f9fa627107e716abca74e5ab6dfcb06ec9c4d1eb6bcda77cf31eb4b6d399,PodSandboxId:c30d241d480724a425fa4df88bcc61faa15127a2f04e1c33f2c99192eab065fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710191963458699533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b910fcb0a6d431bdc6d4aca06fba76b8fa0dcff355becad7704b8c0ca61e6c5,PodSandboxId:4c9bdc775ff42460706f1c47b24257168b99b15597902314a03dfa68f4123d88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710191957744448452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e1d20c50f148e5b6817d3981cee3e3a2e2dd8bf4be237185a87807fd3e8f0c,PodSandboxId:741eddbee9d0cfd575fa5dd91674c810c28d5612e47fc10435277d487bcbf968,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710191957719185733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333183c3c51fba0b7f261d616d5ec022628069db7c342abd2606ee30f7f320bd,PodSandboxId:6ba9e5ab209c6612ae55bbe0591a9c3b56ce30d5c9c3182d6ba74df3f354e066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710191957738854827,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f22983f5021de46fba
4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fb9fc3c7e0ce04610a0e6bf61bdfa522f9ce2fc195a100a2f598619b5d67f,PodSandboxId:aae69d33ed69cf822717622159f00c0feaa4fe58e23718829b4f293325e4d198,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710191957682943422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io
.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1922e224e43ada1274d2dab49d83f21a12deb63f58141bf9ab304755f4793e3,PodSandboxId:adace3260d57f384e8dcf6888b755b9e76a31aae458f26a922579e95d6323233,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710191941982333886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b
7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0976acdedcf95dd211d1f36fe72e3c5c5e504fd572d291cda16ba266bb1c48,PodSandboxId:6fd4dc8d2482c7202a998c5f5993c6820e9b564beadfdaa21b9965370aaf8771,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710191941843254468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash
: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b3e179246e85911e4f5e610e037532173fe6ed3223da3c346ff1978e371195,PodSandboxId:d972e6b30611b062ec44b32cac71aa17440e8d1a83e27ccf5f665639d2834716,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710191941890542312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.contain
er.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173082cedec1acbe4feda4779bb6df9def3edc0c7d34b0e451a1cd7d86c4ce16,PodSandboxId:5cf82986b6ab25f8f1e9a349cdbfe31fd68b8bcfeabfd853dab33eed779e7dea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710191941861844362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36afd2df4351738709d5d1eb16a39204f4129473af32e31066b607ce341e1a80,PodSandboxId:99c55ca2f3976a1914bc7d3cb2f43ce3bfa612be828f3bc6cdde52861c4c5a90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710191941607295082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03578cb6bf10e3bea8c86c3ced9926dc4ea7dc66963ee5eaafd0e6c8016eff83,PodSandboxId:14d96b86dcc55a53f0bbba0ae24a2a042912de87a036ec37508bb46e603828f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710191941421813269,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9f22983f5021de46fba4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b332ed7c-ad56-4aaa-8287-09ddb6a83fa7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.245788860Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbca12c6-4335-4af1-b31b-cba1e0967364 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.245983098Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c30d241d480724a425fa4df88bcc61faa15127a2f04e1c33f2c99192eab065fb,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-qrgqd,Uid:74a57a71-c96e-42cb-83ef-45863ae77f5d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710191954058749813,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-11T21:18:39.170938136Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:96218f26b2f31c160fc3b6799b882a3e61e23a4759aa129608cab6e3ab6308a7,Metadata:&PodSandboxMetadata{Name:kube-proxy-4xhj5,Uid:d5c12c7e-fe54-493b-b844-75d7f9c4a002,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1710191954013527793,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-11T21:18:37.807989623Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aae69d33ed69cf822717622159f00c0feaa4fe58e23718829b4f293325e4d198,Metadata:&PodSandboxMetadata{Name:etcd-pause-717098,Uid:dd223270ff3fac7adedc3f69a104c16f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710191954002824875,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
etcd.advertise-client-urls: https://192.168.50.163:2379,kubernetes.io/config.hash: dd223270ff3fac7adedc3f69a104c16f,kubernetes.io/config.seen: 2024-03-11T21:18:25.273056205Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4c9bdc775ff42460706f1c47b24257168b99b15597902314a03dfa68f4123d88,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-717098,Uid:c0844fe6270bb1cf37846aa5811bb4e7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710191954000916778,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c0844fe6270bb1cf37846aa5811bb4e7,kubernetes.io/config.seen: 2024-03-11T21:18:25.273061622Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6ba9e5ab209c6612ae55bbe0591a9c3b5
6ce30d5c9c3182d6ba74df3f354e066,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-717098,Uid:9f22983f5021de46fba4a218f1776f79,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710191953964444258,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f22983f5021de46fba4a218f1776f79,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.163:8443,kubernetes.io/config.hash: 9f22983f5021de46fba4a218f1776f79,kubernetes.io/config.seen: 2024-03-11T21:18:25.273060413Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:741eddbee9d0cfd575fa5dd91674c810c28d5612e47fc10435277d487bcbf968,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-717098,Uid:e098020ed89d0b71f97088c28b03d960,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710191953944625584,Lab
els:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e098020ed89d0b71f97088c28b03d960,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e098020ed89d0b71f97088c28b03d960,kubernetes.io/config.seen: 2024-03-11T21:18:25.273062681Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=cbca12c6-4335-4af1-b31b-cba1e0967364 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.246735753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fc9c9ac-41eb-4606-818c-623d0bfee790 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.246789334Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fc9c9ac-41eb-4606-818c-623d0bfee790 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.246928550Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4731318699b2019239adccf696ddb34ca81aed86a89e5494b86519edc9033e9e,PodSandboxId:96218f26b2f31c160fc3b6799b882a3e61e23a4759aa129608cab6e3ab6308a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710191963484176003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b96f9fa627107e716abca74e5ab6dfcb06ec9c4d1eb6bcda77cf31eb4b6d399,PodSandboxId:c30d241d480724a425fa4df88bcc61faa15127a2f04e1c33f2c99192eab065fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710191963458699533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b910fcb0a6d431bdc6d4aca06fba76b8fa0dcff355becad7704b8c0ca61e6c5,PodSandboxId:4c9bdc775ff42460706f1c47b24257168b99b15597902314a03dfa68f4123d88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710191957744448452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e1d20c50f148e5b6817d3981cee3e3a2e2dd8bf4be237185a87807fd3e8f0c,PodSandboxId:741eddbee9d0cfd575fa5dd91674c810c28d5612e47fc10435277d487bcbf968,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710191957719185733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333183c3c51fba0b7f261d616d5ec022628069db7c342abd2606ee30f7f320bd,PodSandboxId:6ba9e5ab209c6612ae55bbe0591a9c3b56ce30d5c9c3182d6ba74df3f354e066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710191957738854827,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f22983f5021de46fba
4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fb9fc3c7e0ce04610a0e6bf61bdfa522f9ce2fc195a100a2f598619b5d67f,PodSandboxId:aae69d33ed69cf822717622159f00c0feaa4fe58e23718829b4f293325e4d198,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710191957682943422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io
.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fc9c9ac-41eb-4606-818c-623d0bfee790 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.250153418Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39113c5b-2d49-42e1-bff3-2be4fc84cb21 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.250233711Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39113c5b-2d49-42e1-bff3-2be4fc84cb21 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.251556030Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f03b1de2-967a-42ab-ab73-da23d05920c0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.251953132Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710191983251933038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f03b1de2-967a-42ab-ab73-da23d05920c0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.253188887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f5d869c-ac4f-4727-abbb-6f3754e43079 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.253238505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f5d869c-ac4f-4727-abbb-6f3754e43079 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.253818531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4731318699b2019239adccf696ddb34ca81aed86a89e5494b86519edc9033e9e,PodSandboxId:96218f26b2f31c160fc3b6799b882a3e61e23a4759aa129608cab6e3ab6308a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710191963484176003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b96f9fa627107e716abca74e5ab6dfcb06ec9c4d1eb6bcda77cf31eb4b6d399,PodSandboxId:c30d241d480724a425fa4df88bcc61faa15127a2f04e1c33f2c99192eab065fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710191963458699533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b910fcb0a6d431bdc6d4aca06fba76b8fa0dcff355becad7704b8c0ca61e6c5,PodSandboxId:4c9bdc775ff42460706f1c47b24257168b99b15597902314a03dfa68f4123d88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710191957744448452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e1d20c50f148e5b6817d3981cee3e3a2e2dd8bf4be237185a87807fd3e8f0c,PodSandboxId:741eddbee9d0cfd575fa5dd91674c810c28d5612e47fc10435277d487bcbf968,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710191957719185733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333183c3c51fba0b7f261d616d5ec022628069db7c342abd2606ee30f7f320bd,PodSandboxId:6ba9e5ab209c6612ae55bbe0591a9c3b56ce30d5c9c3182d6ba74df3f354e066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710191957738854827,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f22983f5021de46fba
4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fb9fc3c7e0ce04610a0e6bf61bdfa522f9ce2fc195a100a2f598619b5d67f,PodSandboxId:aae69d33ed69cf822717622159f00c0feaa4fe58e23718829b4f293325e4d198,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710191957682943422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io
.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1922e224e43ada1274d2dab49d83f21a12deb63f58141bf9ab304755f4793e3,PodSandboxId:adace3260d57f384e8dcf6888b755b9e76a31aae458f26a922579e95d6323233,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710191941982333886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b
7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0976acdedcf95dd211d1f36fe72e3c5c5e504fd572d291cda16ba266bb1c48,PodSandboxId:6fd4dc8d2482c7202a998c5f5993c6820e9b564beadfdaa21b9965370aaf8771,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710191941843254468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash
: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b3e179246e85911e4f5e610e037532173fe6ed3223da3c346ff1978e371195,PodSandboxId:d972e6b30611b062ec44b32cac71aa17440e8d1a83e27ccf5f665639d2834716,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710191941890542312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.contain
er.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173082cedec1acbe4feda4779bb6df9def3edc0c7d34b0e451a1cd7d86c4ce16,PodSandboxId:5cf82986b6ab25f8f1e9a349cdbfe31fd68b8bcfeabfd853dab33eed779e7dea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710191941861844362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36afd2df4351738709d5d1eb16a39204f4129473af32e31066b607ce341e1a80,PodSandboxId:99c55ca2f3976a1914bc7d3cb2f43ce3bfa612be828f3bc6cdde52861c4c5a90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710191941607295082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03578cb6bf10e3bea8c86c3ced9926dc4ea7dc66963ee5eaafd0e6c8016eff83,PodSandboxId:14d96b86dcc55a53f0bbba0ae24a2a042912de87a036ec37508bb46e603828f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710191941421813269,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9f22983f5021de46fba4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f5d869c-ac4f-4727-abbb-6f3754e43079 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.303182621Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c914ed24-c955-487b-8509-0e98d6f328cb name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.303280655Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c914ed24-c955-487b-8509-0e98d6f328cb name=/runtime.v1.RuntimeService/Version
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.304508022Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=32e1905b-150b-4bf5-bb78-6f3634a4ba5c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.304945843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710191983304923429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32e1905b-150b-4bf5-bb78-6f3634a4ba5c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.305622822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c66fe52a-962f-43cc-9c85-769c119f2b04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.305668565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c66fe52a-962f-43cc-9c85-769c119f2b04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:19:43 pause-717098 crio[2740]: time="2024-03-11 21:19:43.305909858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4731318699b2019239adccf696ddb34ca81aed86a89e5494b86519edc9033e9e,PodSandboxId:96218f26b2f31c160fc3b6799b882a3e61e23a4759aa129608cab6e3ab6308a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710191963484176003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b96f9fa627107e716abca74e5ab6dfcb06ec9c4d1eb6bcda77cf31eb4b6d399,PodSandboxId:c30d241d480724a425fa4df88bcc61faa15127a2f04e1c33f2c99192eab065fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710191963458699533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b910fcb0a6d431bdc6d4aca06fba76b8fa0dcff355becad7704b8c0ca61e6c5,PodSandboxId:4c9bdc775ff42460706f1c47b24257168b99b15597902314a03dfa68f4123d88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710191957744448452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77e1d20c50f148e5b6817d3981cee3e3a2e2dd8bf4be237185a87807fd3e8f0c,PodSandboxId:741eddbee9d0cfd575fa5dd91674c810c28d5612e47fc10435277d487bcbf968,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710191957719185733,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333183c3c51fba0b7f261d616d5ec022628069db7c342abd2606ee30f7f320bd,PodSandboxId:6ba9e5ab209c6612ae55bbe0591a9c3b56ce30d5c9c3182d6ba74df3f354e066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710191957738854827,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f22983f5021de46fba
4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f0fb9fc3c7e0ce04610a0e6bf61bdfa522f9ce2fc195a100a2f598619b5d67f,PodSandboxId:aae69d33ed69cf822717622159f00c0feaa4fe58e23718829b4f293325e4d198,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710191957682943422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io
.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1922e224e43ada1274d2dab49d83f21a12deb63f58141bf9ab304755f4793e3,PodSandboxId:adace3260d57f384e8dcf6888b755b9e76a31aae458f26a922579e95d6323233,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710191941982333886,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xhj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5c12c7e-fe54-493b-b844-75d7f9c4a002,},Annotations:map[string]string{io.kubernetes.container.hash: 92156b
7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd0976acdedcf95dd211d1f36fe72e3c5c5e504fd572d291cda16ba266bb1c48,PodSandboxId:6fd4dc8d2482c7202a998c5f5993c6820e9b564beadfdaa21b9965370aaf8771,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710191941843254468,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0844fe6270bb1cf37846aa5811bb4e7,},Annotations:map[string]string{io.kubernetes.container.hash
: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57b3e179246e85911e4f5e610e037532173fe6ed3223da3c346ff1978e371195,PodSandboxId:d972e6b30611b062ec44b32cac71aa17440e8d1a83e27ccf5f665639d2834716,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710191941890542312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qrgqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a57a71-c96e-42cb-83ef-45863ae77f5d,},Annotations:map[string]string{io.kubernetes.container.hash: f20bfc5a,io.kubernetes.contain
er.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:173082cedec1acbe4feda4779bb6df9def3edc0c7d34b0e451a1cd7d86c4ce16,PodSandboxId:5cf82986b6ab25f8f1e9a349cdbfe31fd68b8bcfeabfd853dab33eed779e7dea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710191941861844362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-717098,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: dd223270ff3fac7adedc3f69a104c16f,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5a0547,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36afd2df4351738709d5d1eb16a39204f4129473af32e31066b607ce341e1a80,PodSandboxId:99c55ca2f3976a1914bc7d3cb2f43ce3bfa612be828f3bc6cdde52861c4c5a90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710191941607295082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-717098,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e098020ed89d0b71f97088c28b03d960,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03578cb6bf10e3bea8c86c3ced9926dc4ea7dc66963ee5eaafd0e6c8016eff83,PodSandboxId:14d96b86dcc55a53f0bbba0ae24a2a042912de87a036ec37508bb46e603828f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710191941421813269,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-717098,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9f22983f5021de46fba4a218f1776f79,},Annotations:map[string]string{io.kubernetes.container.hash: 5690005a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c66fe52a-962f-43cc-9c85-769c119f2b04 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4731318699b20       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   19 seconds ago      Running             kube-proxy                2                   96218f26b2f31       kube-proxy-4xhj5
	6b96f9fa62710       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   19 seconds ago      Running             coredns                   2                   c30d241d48072       coredns-5dd5756b68-qrgqd
	3b910fcb0a6d4       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   25 seconds ago      Running             kube-controller-manager   2                   4c9bdc775ff42       kube-controller-manager-pause-717098
	333183c3c51fb       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   25 seconds ago      Running             kube-apiserver            2                   6ba9e5ab209c6       kube-apiserver-pause-717098
	77e1d20c50f14       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   25 seconds ago      Running             kube-scheduler            2                   741eddbee9d0c       kube-scheduler-pause-717098
	9f0fb9fc3c7e0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   25 seconds ago      Running             etcd                      2                   aae69d33ed69c       etcd-pause-717098
	a1922e224e43a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   41 seconds ago      Exited              kube-proxy                1                   adace3260d57f       kube-proxy-4xhj5
	57b3e179246e8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   41 seconds ago      Exited              coredns                   1                   d972e6b30611b       coredns-5dd5756b68-qrgqd
	173082cedec1a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   41 seconds ago      Exited              etcd                      1                   5cf82986b6ab2       etcd-pause-717098
	bd0976acdedcf       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   41 seconds ago      Exited              kube-controller-manager   1                   6fd4dc8d2482c       kube-controller-manager-pause-717098
	36afd2df43517       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   41 seconds ago      Exited              kube-scheduler            1                   99c55ca2f3976       kube-scheduler-pause-717098
	03578cb6bf10e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   41 seconds ago      Exited              kube-apiserver            1                   14d96b86dcc55       kube-apiserver-pause-717098
	
	
	==> coredns [57b3e179246e85911e4f5e610e037532173fe6ed3223da3c346ff1978e371195] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:41199 - 34979 "HINFO IN 7726036319816656465.436920015681860341. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009544172s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [6b96f9fa627107e716abca74e5ab6dfcb06ec9c4d1eb6bcda77cf31eb4b6d399] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53126 - 55337 "HINFO IN 5806598755898245643.9217482599092167676. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016182314s
	
	
	==> describe nodes <==
	Name:               pause-717098
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-717098
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=pause-717098
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T21_18_25_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 21:18:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-717098
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:19:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 21:19:22 +0000   Mon, 11 Mar 2024 21:18:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 21:19:22 +0000   Mon, 11 Mar 2024 21:18:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 21:19:22 +0000   Mon, 11 Mar 2024 21:18:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 21:19:22 +0000   Mon, 11 Mar 2024 21:18:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.163
	  Hostname:    pause-717098
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c55693a10de425cac68f33e1c8480ff
	  System UUID:                1c55693a-10de-425c-ac68-f33e1c8480ff
	  Boot ID:                    bffd17f1-85c8-453a-8485-d94fb780e0bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-qrgqd                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     66s
	  kube-system                 etcd-pause-717098                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         78s
	  kube-system                 kube-apiserver-pause-717098             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-717098    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-4xhj5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-pause-717098             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 64s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 85s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  85s (x8 over 85s)  kubelet          Node pause-717098 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s (x8 over 85s)  kubelet          Node pause-717098 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x7 over 85s)  kubelet          Node pause-717098 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     78s                kubelet          Node pause-717098 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node pause-717098 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node pause-717098 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                78s                kubelet          Node pause-717098 status is now: NodeReady
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           66s                node-controller  Node pause-717098 event: Registered Node pause-717098 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-717098 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-717098 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-717098 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-717098 event: Registered Node pause-717098 in Controller
	
	
	==> dmesg <==
	[  +0.074309] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.216743] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.138723] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.293701] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +5.512610] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.063754] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.884096] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.416734] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.854403] systemd-fstab-generator[1273]: Ignoring "noauto" option for root device
	[  +0.085710] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.472494] hrtimer: interrupt took 6477603 ns
	[ +11.582670] systemd-fstab-generator[1486]: Ignoring "noauto" option for root device
	[  +0.108304] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.411524] kauditd_printk_skb: 82 callbacks suppressed
	[ +15.009866] systemd-fstab-generator[2219]: Ignoring "noauto" option for root device
	[Mar11 21:19] systemd-fstab-generator[2263]: Ignoring "noauto" option for root device
	[  +0.362637] systemd-fstab-generator[2385]: Ignoring "noauto" option for root device
	[  +0.333487] systemd-fstab-generator[2460]: Ignoring "noauto" option for root device
	[  +0.698848] systemd-fstab-generator[2670]: Ignoring "noauto" option for root device
	[ +11.338533] systemd-fstab-generator[2941]: Ignoring "noauto" option for root device
	[  +0.095590] kauditd_printk_skb: 169 callbacks suppressed
	[  +3.142176] systemd-fstab-generator[3335]: Ignoring "noauto" option for root device
	[  +6.687986] kauditd_printk_skb: 105 callbacks suppressed
	[ +11.427250] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.323287] systemd-fstab-generator[3768]: Ignoring "noauto" option for root device
	
	
	==> etcd [173082cedec1acbe4feda4779bb6df9def3edc0c7d34b0e451a1cd7d86c4ce16] <==
	
	
	==> etcd [9f0fb9fc3c7e0ce04610a0e6bf61bdfa522f9ce2fc195a100a2f598619b5d67f] <==
	{"level":"info","ts":"2024-03-11T21:19:18.207864Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T21:19:18.207874Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T21:19:18.208121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 switched to configuration voters=(541872491336215268)"}
	{"level":"info","ts":"2024-03-11T21:19:18.208208Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c04ffccd875dba59","local-member-id":"7851e28efa6aae4","added-peer-id":"7851e28efa6aae4","added-peer-peer-urls":["https://192.168.50.163:2380"]}
	{"level":"info","ts":"2024-03-11T21:19:18.208299Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c04ffccd875dba59","local-member-id":"7851e28efa6aae4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:19:18.212647Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:19:18.21817Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-11T21:19:18.243538Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7851e28efa6aae4","initial-advertise-peer-urls":["https://192.168.50.163:2380"],"listen-peer-urls":["https://192.168.50.163:2380"],"advertise-client-urls":["https://192.168.50.163:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.163:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T21:19:18.244571Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T21:19:18.240453Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.163:2380"}
	{"level":"info","ts":"2024-03-11T21:19:18.245899Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.163:2380"}
	{"level":"info","ts":"2024-03-11T21:19:20.086464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-11T21:19:20.086606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-11T21:19:20.08667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 received MsgPreVoteResp from 7851e28efa6aae4 at term 2"}
	{"level":"info","ts":"2024-03-11T21:19:20.086726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 became candidate at term 3"}
	{"level":"info","ts":"2024-03-11T21:19:20.086758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 received MsgVoteResp from 7851e28efa6aae4 at term 3"}
	{"level":"info","ts":"2024-03-11T21:19:20.086793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7851e28efa6aae4 became leader at term 3"}
	{"level":"info","ts":"2024-03-11T21:19:20.086826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7851e28efa6aae4 elected leader 7851e28efa6aae4 at term 3"}
	{"level":"info","ts":"2024-03-11T21:19:20.093745Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7851e28efa6aae4","local-member-attributes":"{Name:pause-717098 ClientURLs:[https://192.168.50.163:2379]}","request-path":"/0/members/7851e28efa6aae4/attributes","cluster-id":"c04ffccd875dba59","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T21:19:20.09377Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:19:20.094196Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T21:19:20.094245Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T21:19:20.094266Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:19:20.095866Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.163:2379"}
	{"level":"info","ts":"2024-03-11T21:19:20.096267Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:19:43 up 2 min,  0 users,  load average: 1.13, 0.45, 0.16
	Linux pause-717098 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [03578cb6bf10e3bea8c86c3ced9926dc4ea7dc66963ee5eaafd0e6c8016eff83] <==
	
	
	==> kube-apiserver [333183c3c51fba0b7f261d616d5ec022628069db7c342abd2606ee30f7f320bd] <==
	I0311 21:19:22.172706       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0311 21:19:22.172735       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0311 21:19:22.172758       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0311 21:19:22.304297       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0311 21:19:22.304522       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0311 21:19:22.305691       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0311 21:19:22.305744       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0311 21:19:22.306428       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0311 21:19:22.318791       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0311 21:19:22.333247       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0311 21:19:22.333847       1 shared_informer.go:318] Caches are synced for configmaps
	I0311 21:19:22.334339       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0311 21:19:22.341415       1 aggregator.go:166] initial CRD sync complete...
	I0311 21:19:22.341521       1 autoregister_controller.go:141] Starting autoregister controller
	I0311 21:19:22.341566       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0311 21:19:22.341579       1 cache.go:39] Caches are synced for autoregister controller
	E0311 21:19:22.363670       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0311 21:19:23.110226       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0311 21:19:24.009211       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0311 21:19:24.047208       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0311 21:19:24.108561       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0311 21:19:24.145775       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0311 21:19:24.157616       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0311 21:19:34.958953       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0311 21:19:35.170667       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3b910fcb0a6d431bdc6d4aca06fba76b8fa0dcff355becad7704b8c0ca61e6c5] <==
	I0311 21:19:34.948522       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-717098"
	I0311 21:19:34.948618       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0311 21:19:34.948695       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0311 21:19:34.948795       1 taint_manager.go:210] "Sending events to api server"
	I0311 21:19:34.948994       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0311 21:19:34.949189       1 event.go:307] "Event occurred" object="pause-717098" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-717098 event: Registered Node pause-717098 in Controller"
	I0311 21:19:34.951664       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0311 21:19:34.952006       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0311 21:19:34.955636       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0311 21:19:34.960052       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0311 21:19:34.972744       1 shared_informer.go:318] Caches are synced for daemon sets
	I0311 21:19:34.977235       1 shared_informer.go:318] Caches are synced for namespace
	I0311 21:19:34.981551       1 shared_informer.go:318] Caches are synced for TTL
	I0311 21:19:35.002608       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0311 21:19:35.006032       1 shared_informer.go:318] Caches are synced for deployment
	I0311 21:19:35.013443       1 shared_informer.go:318] Caches are synced for HPA
	I0311 21:19:35.023458       1 shared_informer.go:318] Caches are synced for attach detach
	I0311 21:19:35.052942       1 shared_informer.go:318] Caches are synced for disruption
	I0311 21:19:35.070336       1 shared_informer.go:318] Caches are synced for resource quota
	I0311 21:19:35.099610       1 shared_informer.go:318] Caches are synced for resource quota
	I0311 21:19:35.132608       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0311 21:19:35.156562       1 shared_informer.go:318] Caches are synced for endpoint
	I0311 21:19:35.532566       1 shared_informer.go:318] Caches are synced for garbage collector
	I0311 21:19:35.566235       1 shared_informer.go:318] Caches are synced for garbage collector
	I0311 21:19:35.566424       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [bd0976acdedcf95dd211d1f36fe72e3c5c5e504fd572d291cda16ba266bb1c48] <==
	
	
	==> kube-proxy [4731318699b2019239adccf696ddb34ca81aed86a89e5494b86519edc9033e9e] <==
	I0311 21:19:23.709549       1 server_others.go:69] "Using iptables proxy"
	I0311 21:19:23.732102       1 node.go:141] Successfully retrieved node IP: 192.168.50.163
	I0311 21:19:23.830090       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 21:19:23.830149       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 21:19:23.843184       1 server_others.go:152] "Using iptables Proxier"
	I0311 21:19:23.843288       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 21:19:23.851833       1 server.go:846] "Version info" version="v1.28.4"
	I0311 21:19:23.851888       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:19:23.853122       1 config.go:188] "Starting service config controller"
	I0311 21:19:23.853193       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 21:19:23.853303       1 config.go:97] "Starting endpoint slice config controller"
	I0311 21:19:23.853339       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 21:19:23.856217       1 config.go:315] "Starting node config controller"
	I0311 21:19:23.856259       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 21:19:23.954465       1 shared_informer.go:318] Caches are synced for service config
	I0311 21:19:23.955723       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 21:19:23.957649       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [a1922e224e43ada1274d2dab49d83f21a12deb63f58141bf9ab304755f4793e3] <==
	
	
	==> kube-scheduler [36afd2df4351738709d5d1eb16a39204f4129473af32e31066b607ce341e1a80] <==
	
	
	==> kube-scheduler [77e1d20c50f148e5b6817d3981cee3e3a2e2dd8bf4be237185a87807fd3e8f0c] <==
	I0311 21:19:19.071490       1 serving.go:348] Generated self-signed cert in-memory
	W0311 21:19:22.266458       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0311 21:19:22.266758       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 21:19:22.267042       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0311 21:19:22.267191       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0311 21:19:22.355698       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0311 21:19:22.366477       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:19:22.376489       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0311 21:19:22.376710       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 21:19:22.390728       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0311 21:19:22.391113       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0311 21:19:22.477248       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 21:19:17 pause-717098 kubelet[3342]: I0311 21:19:17.653562    3342 scope.go:117] "RemoveContainer" containerID="36afd2df4351738709d5d1eb16a39204f4129473af32e31066b607ce341e1a80"
	Mar 11 21:19:17 pause-717098 kubelet[3342]: E0311 21:19:17.761538    3342 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-717098?timeout=10s\": dial tcp 192.168.50.163:8443: connect: connection refused" interval="800ms"
	Mar 11 21:19:17 pause-717098 kubelet[3342]: I0311 21:19:17.864787    3342 kubelet_node_status.go:70] "Attempting to register node" node="pause-717098"
	Mar 11 21:19:17 pause-717098 kubelet[3342]: E0311 21:19:17.867843    3342 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.163:8443: connect: connection refused" node="pause-717098"
	Mar 11 21:19:18 pause-717098 kubelet[3342]: W0311 21:19:18.117072    3342 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.163:8443: connect: connection refused
	Mar 11 21:19:18 pause-717098 kubelet[3342]: E0311 21:19:18.117149    3342 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.163:8443: connect: connection refused
	Mar 11 21:19:18 pause-717098 kubelet[3342]: E0311 21:19:18.150108    3342 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-717098.17bbd286264356b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-717098", UID:"pause-717098", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"pause-717098"}, FirstTimestamp:time.Date(2024, time.March, 11, 21, 19, 17, 126633144, time.Local), LastTimestamp:time.Date(2024, time.March, 11, 21, 19, 17, 126633144, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"pause-717098"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.50.163:8443: connect: connection refused'(may retry after sleeping)
	Mar 11 21:19:18 pause-717098 kubelet[3342]: W0311 21:19:18.272016    3342 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.163:8443: connect: connection refused
	Mar 11 21:19:18 pause-717098 kubelet[3342]: E0311 21:19:18.272070    3342 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.163:8443: connect: connection refused
	Mar 11 21:19:18 pause-717098 kubelet[3342]: W0311 21:19:18.283221    3342 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-717098&limit=500&resourceVersion=0": dial tcp 192.168.50.163:8443: connect: connection refused
	Mar 11 21:19:18 pause-717098 kubelet[3342]: E0311 21:19:18.283276    3342 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-717098&limit=500&resourceVersion=0": dial tcp 192.168.50.163:8443: connect: connection refused
	Mar 11 21:19:18 pause-717098 kubelet[3342]: I0311 21:19:18.670319    3342 kubelet_node_status.go:70] "Attempting to register node" node="pause-717098"
	Mar 11 21:19:22 pause-717098 kubelet[3342]: I0311 21:19:22.348042    3342 kubelet_node_status.go:108] "Node was previously registered" node="pause-717098"
	Mar 11 21:19:22 pause-717098 kubelet[3342]: I0311 21:19:22.348162    3342 kubelet_node_status.go:73] "Successfully registered node" node="pause-717098"
	Mar 11 21:19:22 pause-717098 kubelet[3342]: I0311 21:19:22.383586    3342 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 11 21:19:22 pause-717098 kubelet[3342]: I0311 21:19:22.389536    3342 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.133624    3342 apiserver.go:52] "Watching apiserver"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.137461    3342 topology_manager.go:215] "Topology Admit Handler" podUID="d5c12c7e-fe54-493b-b844-75d7f9c4a002" podNamespace="kube-system" podName="kube-proxy-4xhj5"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.137613    3342 topology_manager.go:215] "Topology Admit Handler" podUID="74a57a71-c96e-42cb-83ef-45863ae77f5d" podNamespace="kube-system" podName="coredns-5dd5756b68-qrgqd"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.154552    3342 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.175517    3342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5c12c7e-fe54-493b-b844-75d7f9c4a002-lib-modules\") pod \"kube-proxy-4xhj5\" (UID: \"d5c12c7e-fe54-493b-b844-75d7f9c4a002\") " pod="kube-system/kube-proxy-4xhj5"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.175597    3342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5c12c7e-fe54-493b-b844-75d7f9c4a002-xtables-lock\") pod \"kube-proxy-4xhj5\" (UID: \"d5c12c7e-fe54-493b-b844-75d7f9c4a002\") " pod="kube-system/kube-proxy-4xhj5"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.439246    3342 scope.go:117] "RemoveContainer" containerID="57b3e179246e85911e4f5e610e037532173fe6ed3223da3c346ff1978e371195"
	Mar 11 21:19:23 pause-717098 kubelet[3342]: I0311 21:19:23.440077    3342 scope.go:117] "RemoveContainer" containerID="a1922e224e43ada1274d2dab49d83f21a12deb63f58141bf9ab304755f4793e3"
	Mar 11 21:19:29 pause-717098 kubelet[3342]: I0311 21:19:29.728235    3342 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-717098 -n pause-717098
helpers_test.go:261: (dbg) Run:  kubectl --context pause-717098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (61.81s)
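
Note: the kube-scheduler log above hints at a remediation for the extension-apiserver-authentication lookup warning. In this run the warning was transient (the scheduler's client-ca caches synced once the apiserver finished restarting), but if it persisted, the suggested rolebinding could be created. A minimal sketch only, with a hypothetical binding name, and binding the system:kube-scheduler user named in the forbidden error rather than the service-account placeholder shown in the log's template:

	kubectl -n kube-system create rolebinding scheduler-ext-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler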

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (291.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-239315 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-239315 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m51.514239239s)

                                                
                                                
-- stdout --
	* [old-k8s-version-239315] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-239315" primary control-plane node in "old-k8s-version-239315" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 21:23:25.423576   63745 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:23:25.423731   63745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:23:25.423743   63745 out.go:304] Setting ErrFile to fd 2...
	I0311 21:23:25.423749   63745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:23:25.424043   63745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:23:25.424841   63745 out.go:298] Setting JSON to false
	I0311 21:23:25.426320   63745 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7554,"bootTime":1710184651,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:23:25.426443   63745 start.go:139] virtualization: kvm guest
	I0311 21:23:25.428577   63745 out.go:177] * [old-k8s-version-239315] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:23:25.430388   63745 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:23:25.430433   63745 notify.go:220] Checking for updates...
	I0311 21:23:25.431707   63745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:23:25.433315   63745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:23:25.434612   63745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:23:25.435957   63745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:23:25.437258   63745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:23:25.438970   63745 config.go:182] Loaded profile config "bridge-427678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:23:25.439094   63745 config.go:182] Loaded profile config "enable-default-cni-427678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:23:25.439210   63745 config.go:182] Loaded profile config "flannel-427678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:23:25.439329   63745 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:23:25.482122   63745 out.go:177] * Using the kvm2 driver based on user configuration
	I0311 21:23:25.483601   63745 start.go:297] selected driver: kvm2
	I0311 21:23:25.483616   63745 start.go:901] validating driver "kvm2" against <nil>
	I0311 21:23:25.483625   63745 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:23:25.484329   63745 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:23:25.484403   63745 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:23:25.499606   63745 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:23:25.499665   63745 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 21:23:25.499940   63745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:23:25.499976   63745 cni.go:84] Creating CNI manager for ""
	I0311 21:23:25.499983   63745 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:23:25.499988   63745 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 21:23:25.500055   63745 start.go:340] cluster config:
	{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:23:25.500175   63745 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:23:25.501958   63745 out.go:177] * Starting "old-k8s-version-239315" primary control-plane node in "old-k8s-version-239315" cluster
	I0311 21:23:25.503280   63745 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:23:25.503323   63745 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0311 21:23:25.503335   63745 cache.go:56] Caching tarball of preloaded images
	I0311 21:23:25.503418   63745 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:23:25.503431   63745 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0311 21:23:25.503524   63745 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:23:25.503548   63745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json: {Name:mk1d4c77be6c093cca4dd64c973e62cb6e078121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:23:25.503702   63745 start.go:360] acquireMachinesLock for old-k8s-version-239315: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:23:42.630818   63745 start.go:364] duration metric: took 17.127060027s to acquireMachinesLock for "old-k8s-version-239315"
	I0311 21:23:42.630886   63745 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:23:42.630998   63745 start.go:125] createHost starting for "" (driver="kvm2")
	I0311 21:23:42.632550   63745 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 21:23:42.632771   63745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:23:42.632824   63745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:23:42.652390   63745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0311 21:23:42.652838   63745 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:23:42.653457   63745 main.go:141] libmachine: Using API Version  1
	I0311 21:23:42.653484   63745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:23:42.653871   63745 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:23:42.654043   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:23:42.654188   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:23:42.654323   63745 start.go:159] libmachine.API.Create for "old-k8s-version-239315" (driver="kvm2")
	I0311 21:23:42.654351   63745 client.go:168] LocalClient.Create starting
	I0311 21:23:42.654398   63745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem
	I0311 21:23:42.654431   63745 main.go:141] libmachine: Decoding PEM data...
	I0311 21:23:42.654456   63745 main.go:141] libmachine: Parsing certificate...
	I0311 21:23:42.654527   63745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem
	I0311 21:23:42.654551   63745 main.go:141] libmachine: Decoding PEM data...
	I0311 21:23:42.654565   63745 main.go:141] libmachine: Parsing certificate...
	I0311 21:23:42.654594   63745 main.go:141] libmachine: Running pre-create checks...
	I0311 21:23:42.654606   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .PreCreateCheck
	I0311 21:23:42.654995   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetConfigRaw
	I0311 21:23:42.655404   63745 main.go:141] libmachine: Creating machine...
	I0311 21:23:42.655421   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .Create
	I0311 21:23:42.655556   63745 main.go:141] libmachine: (old-k8s-version-239315) Creating KVM machine...
	I0311 21:23:42.656715   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found existing default KVM network
	I0311 21:23:42.658152   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:42.657991   63911 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a7:1c:1b} reservation:<nil>}
	I0311 21:23:42.659334   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:42.659243   63911 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:3a:db:63} reservation:<nil>}
	I0311 21:23:42.660586   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:42.660498   63911 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:7f:c4:f8} reservation:<nil>}
	I0311 21:23:42.662123   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:42.662038   63911 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003091b0}
	I0311 21:23:42.662155   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | created network xml: 
	I0311 21:23:42.662174   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | <network>
	I0311 21:23:42.662214   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG |   <name>mk-old-k8s-version-239315</name>
	I0311 21:23:42.662240   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG |   <dns enable='no'/>
	I0311 21:23:42.662259   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG |   
	I0311 21:23:42.662273   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0311 21:23:42.662287   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG |     <dhcp>
	I0311 21:23:42.662298   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0311 21:23:42.662310   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG |     </dhcp>
	I0311 21:23:42.662320   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG |   </ip>
	I0311 21:23:42.662344   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG |   
	I0311 21:23:42.662358   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | </network>
	I0311 21:23:42.662369   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | 
	I0311 21:23:42.668013   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | trying to create private KVM network mk-old-k8s-version-239315 192.168.72.0/24...
	I0311 21:23:42.755844   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | private KVM network mk-old-k8s-version-239315 192.168.72.0/24 created
	I0311 21:23:42.755889   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:42.755832   63911 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:23:42.755903   63745 main.go:141] libmachine: (old-k8s-version-239315) Setting up store path in /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315 ...
	I0311 21:23:42.755919   63745 main.go:141] libmachine: (old-k8s-version-239315) Building disk image from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0311 21:23:42.755939   63745 main.go:141] libmachine: (old-k8s-version-239315) Downloading /home/jenkins/minikube-integration/18358-11004/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0311 21:23:43.010041   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:43.009884   63911 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa...
	I0311 21:23:43.123845   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:43.123729   63911 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/old-k8s-version-239315.rawdisk...
	I0311 21:23:43.123912   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Writing magic tar header
	I0311 21:23:43.123991   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Writing SSH key tar header
	I0311 21:23:43.124200   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:43.124077   63911 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315 ...
	I0311 21:23:43.124245   63745 main.go:141] libmachine: (old-k8s-version-239315) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315 (perms=drwx------)
	I0311 21:23:43.124263   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315
	I0311 21:23:43.124278   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines
	I0311 21:23:43.124291   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:23:43.124301   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004
	I0311 21:23:43.124311   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0311 21:23:43.124319   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Checking permissions on dir: /home/jenkins
	I0311 21:23:43.124328   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Checking permissions on dir: /home
	I0311 21:23:43.124335   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Skipping /home - not owner
	I0311 21:23:43.124350   63745 main.go:141] libmachine: (old-k8s-version-239315) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines (perms=drwxr-xr-x)
	I0311 21:23:43.124362   63745 main.go:141] libmachine: (old-k8s-version-239315) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube (perms=drwxr-xr-x)
	I0311 21:23:43.124373   63745 main.go:141] libmachine: (old-k8s-version-239315) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004 (perms=drwxrwxr-x)
	I0311 21:23:43.124381   63745 main.go:141] libmachine: (old-k8s-version-239315) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0311 21:23:43.124390   63745 main.go:141] libmachine: (old-k8s-version-239315) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0311 21:23:43.124397   63745 main.go:141] libmachine: (old-k8s-version-239315) Creating domain...
	I0311 21:23:43.125783   63745 main.go:141] libmachine: (old-k8s-version-239315) define libvirt domain using xml: 
	I0311 21:23:43.125801   63745 main.go:141] libmachine: (old-k8s-version-239315) <domain type='kvm'>
	I0311 21:23:43.125821   63745 main.go:141] libmachine: (old-k8s-version-239315)   <name>old-k8s-version-239315</name>
	I0311 21:23:43.125829   63745 main.go:141] libmachine: (old-k8s-version-239315)   <memory unit='MiB'>2200</memory>
	I0311 21:23:43.125836   63745 main.go:141] libmachine: (old-k8s-version-239315)   <vcpu>2</vcpu>
	I0311 21:23:43.125843   63745 main.go:141] libmachine: (old-k8s-version-239315)   <features>
	I0311 21:23:43.125874   63745 main.go:141] libmachine: (old-k8s-version-239315)     <acpi/>
	I0311 21:23:43.125881   63745 main.go:141] libmachine: (old-k8s-version-239315)     <apic/>
	I0311 21:23:43.125889   63745 main.go:141] libmachine: (old-k8s-version-239315)     <pae/>
	I0311 21:23:43.125895   63745 main.go:141] libmachine: (old-k8s-version-239315)     
	I0311 21:23:43.125910   63745 main.go:141] libmachine: (old-k8s-version-239315)   </features>
	I0311 21:23:43.125916   63745 main.go:141] libmachine: (old-k8s-version-239315)   <cpu mode='host-passthrough'>
	I0311 21:23:43.125923   63745 main.go:141] libmachine: (old-k8s-version-239315)   
	I0311 21:23:43.125928   63745 main.go:141] libmachine: (old-k8s-version-239315)   </cpu>
	I0311 21:23:43.125940   63745 main.go:141] libmachine: (old-k8s-version-239315)   <os>
	I0311 21:23:43.125947   63745 main.go:141] libmachine: (old-k8s-version-239315)     <type>hvm</type>
	I0311 21:23:43.125955   63745 main.go:141] libmachine: (old-k8s-version-239315)     <boot dev='cdrom'/>
	I0311 21:23:43.125962   63745 main.go:141] libmachine: (old-k8s-version-239315)     <boot dev='hd'/>
	I0311 21:23:43.125970   63745 main.go:141] libmachine: (old-k8s-version-239315)     <bootmenu enable='no'/>
	I0311 21:23:43.125976   63745 main.go:141] libmachine: (old-k8s-version-239315)   </os>
	I0311 21:23:43.125986   63745 main.go:141] libmachine: (old-k8s-version-239315)   <devices>
	I0311 21:23:43.125994   63745 main.go:141] libmachine: (old-k8s-version-239315)     <disk type='file' device='cdrom'>
	I0311 21:23:43.126007   63745 main.go:141] libmachine: (old-k8s-version-239315)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/boot2docker.iso'/>
	I0311 21:23:43.126015   63745 main.go:141] libmachine: (old-k8s-version-239315)       <target dev='hdc' bus='scsi'/>
	I0311 21:23:43.126024   63745 main.go:141] libmachine: (old-k8s-version-239315)       <readonly/>
	I0311 21:23:43.126031   63745 main.go:141] libmachine: (old-k8s-version-239315)     </disk>
	I0311 21:23:43.126040   63745 main.go:141] libmachine: (old-k8s-version-239315)     <disk type='file' device='disk'>
	I0311 21:23:43.126048   63745 main.go:141] libmachine: (old-k8s-version-239315)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0311 21:23:43.126066   63745 main.go:141] libmachine: (old-k8s-version-239315)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/old-k8s-version-239315.rawdisk'/>
	I0311 21:23:43.126075   63745 main.go:141] libmachine: (old-k8s-version-239315)       <target dev='hda' bus='virtio'/>
	I0311 21:23:43.126083   63745 main.go:141] libmachine: (old-k8s-version-239315)     </disk>
	I0311 21:23:43.126091   63745 main.go:141] libmachine: (old-k8s-version-239315)     <interface type='network'>
	I0311 21:23:43.126101   63745 main.go:141] libmachine: (old-k8s-version-239315)       <source network='mk-old-k8s-version-239315'/>
	I0311 21:23:43.126108   63745 main.go:141] libmachine: (old-k8s-version-239315)       <model type='virtio'/>
	I0311 21:23:43.126117   63745 main.go:141] libmachine: (old-k8s-version-239315)     </interface>
	I0311 21:23:43.126124   63745 main.go:141] libmachine: (old-k8s-version-239315)     <interface type='network'>
	I0311 21:23:43.126134   63745 main.go:141] libmachine: (old-k8s-version-239315)       <source network='default'/>
	I0311 21:23:43.126141   63745 main.go:141] libmachine: (old-k8s-version-239315)       <model type='virtio'/>
	I0311 21:23:43.126152   63745 main.go:141] libmachine: (old-k8s-version-239315)     </interface>
	I0311 21:23:43.126159   63745 main.go:141] libmachine: (old-k8s-version-239315)     <serial type='pty'>
	I0311 21:23:43.126167   63745 main.go:141] libmachine: (old-k8s-version-239315)       <target port='0'/>
	I0311 21:23:43.126174   63745 main.go:141] libmachine: (old-k8s-version-239315)     </serial>
	I0311 21:23:43.126188   63745 main.go:141] libmachine: (old-k8s-version-239315)     <console type='pty'>
	I0311 21:23:43.126195   63745 main.go:141] libmachine: (old-k8s-version-239315)       <target type='serial' port='0'/>
	I0311 21:23:43.126203   63745 main.go:141] libmachine: (old-k8s-version-239315)     </console>
	I0311 21:23:43.126210   63745 main.go:141] libmachine: (old-k8s-version-239315)     <rng model='virtio'>
	I0311 21:23:43.126219   63745 main.go:141] libmachine: (old-k8s-version-239315)       <backend model='random'>/dev/random</backend>
	I0311 21:23:43.126225   63745 main.go:141] libmachine: (old-k8s-version-239315)     </rng>
	I0311 21:23:43.126233   63745 main.go:141] libmachine: (old-k8s-version-239315)     
	I0311 21:23:43.126239   63745 main.go:141] libmachine: (old-k8s-version-239315)     
	I0311 21:23:43.126247   63745 main.go:141] libmachine: (old-k8s-version-239315)   </devices>
	I0311 21:23:43.126253   63745 main.go:141] libmachine: (old-k8s-version-239315) </domain>
	I0311 21:23:43.126264   63745 main.go:141] libmachine: (old-k8s-version-239315) 
	I0311 21:23:43.130135   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:dd:99:0e in network default
	I0311 21:23:43.130891   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:43.130913   63745 main.go:141] libmachine: (old-k8s-version-239315) Ensuring networks are active...
	I0311 21:23:43.131811   63745 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network default is active
	I0311 21:23:43.132235   63745 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network mk-old-k8s-version-239315 is active
	I0311 21:23:43.132829   63745 main.go:141] libmachine: (old-k8s-version-239315) Getting domain xml...
	I0311 21:23:43.133603   63745 main.go:141] libmachine: (old-k8s-version-239315) Creating domain...
	I0311 21:23:44.602645   63745 main.go:141] libmachine: (old-k8s-version-239315) Waiting to get IP...
	I0311 21:23:44.603387   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:44.603862   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:23:44.603922   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:44.603860   63911 retry.go:31] will retry after 239.686243ms: waiting for machine to come up
	I0311 21:23:44.845795   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:44.846369   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:23:44.846393   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:44.846322   63911 retry.go:31] will retry after 336.3542ms: waiting for machine to come up
	I0311 21:23:45.183841   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:45.184350   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:23:45.184381   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:45.184328   63911 retry.go:31] will retry after 404.660802ms: waiting for machine to come up
	I0311 21:23:45.591035   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:45.591651   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:23:45.591690   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:45.591602   63911 retry.go:31] will retry after 413.834521ms: waiting for machine to come up
	I0311 21:23:46.007020   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:46.007569   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:23:46.007598   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:46.007541   63911 retry.go:31] will retry after 668.863679ms: waiting for machine to come up
	I0311 21:23:46.677990   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:46.678449   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:23:46.678483   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:46.678395   63911 retry.go:31] will retry after 840.803863ms: waiting for machine to come up
	I0311 21:23:47.521223   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:47.521666   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:23:47.521692   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:47.521588   63911 retry.go:31] will retry after 1.020629438s: waiting for machine to come up
	I0311 21:23:48.544232   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:48.544833   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:23:48.544865   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:48.544787   63911 retry.go:31] will retry after 1.332842478s: waiting for machine to come up
	I0311 21:23:49.879397   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:49.880479   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:23:49.880500   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:49.880183   63911 retry.go:31] will retry after 1.690370074s: waiting for machine to come up
	I0311 21:23:51.572540   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:51.573209   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:23:51.573237   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:51.573153   63911 retry.go:31] will retry after 2.041129416s: waiting for machine to come up
	I0311 21:23:53.615765   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:53.616345   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:23:53.616372   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:53.616275   63911 retry.go:31] will retry after 2.719242718s: waiting for machine to come up
	I0311 21:23:56.338416   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:56.338953   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:23:56.338984   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:56.338880   63911 retry.go:31] will retry after 3.210319334s: waiting for machine to come up
	I0311 21:23:59.551394   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:23:59.551932   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:23:59.551962   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:23:59.551897   63911 retry.go:31] will retry after 2.872302706s: waiting for machine to come up
	I0311 21:24:02.425932   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:02.426419   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:24:02.426444   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:24:02.426376   63911 retry.go:31] will retry after 4.705116933s: waiting for machine to come up
	I0311 21:24:07.133789   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.134242   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has current primary IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.134270   63745 main.go:141] libmachine: (old-k8s-version-239315) Found IP for machine: 192.168.72.52
	I0311 21:24:07.134286   63745 main.go:141] libmachine: (old-k8s-version-239315) Reserving static IP address...
	I0311 21:24:07.134580   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"} in network mk-old-k8s-version-239315
	I0311 21:24:07.210561   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Getting to WaitForSSH function...
	I0311 21:24:07.210593   63745 main.go:141] libmachine: (old-k8s-version-239315) Reserved static IP address: 192.168.72.52
	I0311 21:24:07.210609   63745 main.go:141] libmachine: (old-k8s-version-239315) Waiting for SSH to be available...
	I0311 21:24:07.213435   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.213861   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:07.213892   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.213997   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH client type: external
	I0311 21:24:07.214024   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa (-rw-------)
	I0311 21:24:07.214053   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:24:07.214077   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | About to run SSH command:
	I0311 21:24:07.214089   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | exit 0
	I0311 21:24:07.341132   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | SSH cmd err, output: <nil>: 
	I0311 21:24:07.341423   63745 main.go:141] libmachine: (old-k8s-version-239315) KVM machine creation complete!
	I0311 21:24:07.341865   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetConfigRaw
	I0311 21:24:07.342384   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:24:07.342585   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:24:07.342779   63745 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0311 21:24:07.342799   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetState
	I0311 21:24:07.344383   63745 main.go:141] libmachine: Detecting operating system of created instance...
	I0311 21:24:07.344400   63745 main.go:141] libmachine: Waiting for SSH to be available...
	I0311 21:24:07.344409   63745 main.go:141] libmachine: Getting to WaitForSSH function...
	I0311 21:24:07.344419   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:24:07.347347   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.347756   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:07.347787   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.347927   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:24:07.348133   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:07.348317   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:07.348455   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:24:07.348653   63745 main.go:141] libmachine: Using SSH client type: native
	I0311 21:24:07.348893   63745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:24:07.348908   63745 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0311 21:24:07.452039   63745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
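
The `exit 0` command above is libmachine's SSH reachability probe: it keeps running a no-op command over SSH until the guest answers. A minimal, illustrative Go sketch of that pattern follows (not minikube's actual implementation; only a subset of the ssh options from the log is shown, and the key path and retry interval are placeholders):

// waitssh_sketch.go: retry a trivial SSH command until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // guest is reachable over SSH
		}
		time.Sleep(2 * time.Second) // retry interval is an assumption
	}
	return fmt.Errorf("ssh to %s not ready after %v", addr, timeout)
}

func main() {
	// Address from the log; the key path here is a placeholder.
	if err := waitForSSH("192.168.72.52", "/path/to/id_rsa", time.Minute); err != nil {
		fmt.Println(err)
	}
}
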
	I0311 21:24:07.452064   63745 main.go:141] libmachine: Detecting the provisioner...
	I0311 21:24:07.452090   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:24:07.455167   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.455536   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:07.455568   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.455757   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:24:07.455964   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:07.456172   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:07.456332   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:24:07.456529   63745 main.go:141] libmachine: Using SSH client type: native
	I0311 21:24:07.456698   63745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:24:07.456708   63745 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0311 21:24:07.562826   63745 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0311 21:24:07.562906   63745 main.go:141] libmachine: found compatible host: buildroot
	I0311 21:24:07.562919   63745 main.go:141] libmachine: Provisioning with buildroot...
	I0311 21:24:07.562930   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:24:07.563186   63745 buildroot.go:166] provisioning hostname "old-k8s-version-239315"
	I0311 21:24:07.563217   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:24:07.563409   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:24:07.566128   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.566469   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:07.566500   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.566580   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:24:07.566798   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:07.566990   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:07.567150   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:24:07.567339   63745 main.go:141] libmachine: Using SSH client type: native
	I0311 21:24:07.567542   63745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:24:07.567556   63745 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239315 && echo "old-k8s-version-239315" | sudo tee /etc/hostname
	I0311 21:24:07.693929   63745 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239315
	
	I0311 21:24:07.693965   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:24:07.696797   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.697105   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:07.697130   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.697279   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:24:07.697479   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:07.697631   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:07.697776   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:24:07.697961   63745 main.go:141] libmachine: Using SSH client type: native
	I0311 21:24:07.698196   63745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:24:07.698218   63745 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239315' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239315/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239315' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:24:07.820675   63745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
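
The shell snippet above ensures the new hostname resolves locally on the guest: if no /etc/hosts line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry or appends one. A minimal sketch of the same edit in Go (illustrative only; minikube runs the shell version over SSH):

// hosts_sketch.go: map a hostname to 127.0.1.1 in an /etc/hosts-style file.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 1 && f[len(f)-1] == hostname {
			return nil // something already maps to this hostname
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing loopback alias
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname) // otherwise append a new entry
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "old-k8s-version-239315"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
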
	I0311 21:24:07.820705   63745 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:24:07.820727   63745 buildroot.go:174] setting up certificates
	I0311 21:24:07.820752   63745 provision.go:84] configureAuth start
	I0311 21:24:07.820766   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:24:07.821053   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:24:07.823544   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.823885   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:07.823913   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.824063   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:24:07.825969   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.826284   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:07.826314   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.826468   63745 provision.go:143] copyHostCerts
	I0311 21:24:07.826541   63745 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:24:07.826553   63745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:24:07.826603   63745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:24:07.826682   63745 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:24:07.826690   63745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:24:07.826708   63745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:24:07.826755   63745 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:24:07.826762   63745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:24:07.826778   63745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:24:07.826817   63745 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239315 san=[127.0.0.1 192.168.72.52 localhost minikube old-k8s-version-239315]
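
The line above generates a server certificate signed by the local CA, with the organization and SAN list shown (two IPs plus three DNS names). A minimal Go sketch of that kind of CA-signed server certificate, using crypto/x509 (illustrative, not minikube's provision code; file paths are placeholders and an RSA PKCS#1 CA key is assumed):

// servercert_sketch.go: issue a server cert with the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	caCertPEM, err := os.ReadFile("ca.pem") // placeholder path
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem") // placeholder path
	check(err)
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	check(err)

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-239315"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.52")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-239315"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
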
	I0311 21:24:07.941187   63745 provision.go:177] copyRemoteCerts
	I0311 21:24:07.941266   63745 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:24:07.941296   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:24:07.943778   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.944125   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:07.944152   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:07.944302   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:24:07.944542   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:07.944704   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:24:07.944884   63745 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:24:08.030770   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:24:08.059736   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0311 21:24:08.086277   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:24:08.114517   63745 provision.go:87] duration metric: took 293.751532ms to configureAuth
	I0311 21:24:08.114547   63745 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:24:08.114772   63745 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:24:08.114875   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:24:08.118073   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.118546   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:08.118574   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.118779   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:24:08.118976   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:08.119155   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:08.119372   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:24:08.119561   63745 main.go:141] libmachine: Using SSH client type: native
	I0311 21:24:08.119768   63745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:24:08.119791   63745 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:24:08.397191   63745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:24:08.397218   63745 main.go:141] libmachine: Checking connection to Docker...
	I0311 21:24:08.397226   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetURL
	I0311 21:24:08.398626   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using libvirt version 6000000
	I0311 21:24:08.400653   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.401116   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:08.401144   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.401320   63745 main.go:141] libmachine: Docker is up and running!
	I0311 21:24:08.401337   63745 main.go:141] libmachine: Reticulating splines...
	I0311 21:24:08.401344   63745 client.go:171] duration metric: took 25.74698382s to LocalClient.Create
	I0311 21:24:08.401364   63745 start.go:167] duration metric: took 25.74704228s to libmachine.API.Create "old-k8s-version-239315"
	I0311 21:24:08.401373   63745 start.go:293] postStartSetup for "old-k8s-version-239315" (driver="kvm2")
	I0311 21:24:08.401383   63745 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:24:08.401399   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:24:08.401618   63745 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:24:08.401642   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:24:08.403901   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.404200   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:08.404218   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.404418   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:24:08.404596   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:08.404776   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:24:08.404931   63745 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:24:08.489050   63745 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:24:08.493940   63745 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:24:08.493962   63745 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:24:08.494020   63745 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:24:08.494111   63745 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:24:08.494223   63745 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:24:08.505778   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:24:08.535897   63745 start.go:296] duration metric: took 134.512608ms for postStartSetup
	I0311 21:24:08.535950   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetConfigRaw
	I0311 21:24:08.536563   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:24:08.539485   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.539873   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:08.539905   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.540229   63745 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:24:08.540457   63745 start.go:128] duration metric: took 25.90944665s to createHost
	I0311 21:24:08.540480   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:24:08.542898   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.543318   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:08.543354   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.543489   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:24:08.543697   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:08.543884   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:08.544046   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:24:08.544238   63745 main.go:141] libmachine: Using SSH client type: native
	I0311 21:24:08.544444   63745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:24:08.544459   63745 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 21:24:08.655527   63745 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192248.601213584
	
	I0311 21:24:08.655556   63745 fix.go:216] guest clock: 1710192248.601213584
	I0311 21:24:08.655567   63745 fix.go:229] Guest: 2024-03-11 21:24:08.601213584 +0000 UTC Remote: 2024-03-11 21:24:08.540470013 +0000 UTC m=+43.180035009 (delta=60.743571ms)
	I0311 21:24:08.655591   63745 fix.go:200] guest clock delta is within tolerance: 60.743571ms
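
The clock check above runs `date +%s.%N` on the guest, compares the result with the host's view of the time, and accepts the delta if it is within tolerance (60.7ms here). A small Go sketch of that comparison, using the exact values from the log (the 2s tolerance is an assumed value, not necessarily minikube's):

// clockdelta_sketch.go: compare a guest `date +%s.%N` reading with the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed tolerance for the sketch
	// Guest and host timestamps taken from the log lines above.
	delta, err := clockDelta("1710192248.601213584", time.Unix(1710192248, 540470013))
	if err != nil {
		panic(err)
	}
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock drift %v exceeds tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
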
	I0311 21:24:08.655599   63745 start.go:83] releasing machines lock for "old-k8s-version-239315", held for 26.024743191s
	I0311 21:24:08.655624   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:24:08.655943   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:24:08.659030   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.659475   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:08.659507   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.659768   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:24:08.660314   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:24:08.660494   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:24:08.660589   63745 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:24:08.660634   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:24:08.660724   63745 ssh_runner.go:195] Run: cat /version.json
	I0311 21:24:08.660771   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:24:08.663431   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.663618   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.663833   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:08.663868   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.663983   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:24:08.664135   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:08.664142   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:08.664165   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:08.664242   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:24:08.664290   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:24:08.664453   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:24:08.664462   63745 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:24:08.664609   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:24:08.664756   63745 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:24:08.750352   63745 ssh_runner.go:195] Run: systemctl --version
	I0311 21:24:08.773015   63745 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:24:08.951167   63745 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:24:08.959875   63745 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:24:08.959958   63745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:24:08.982012   63745 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:24:08.982032   63745 start.go:494] detecting cgroup driver to use...
	I0311 21:24:08.982108   63745 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:24:09.003023   63745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:24:09.020084   63745 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:24:09.020134   63745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:24:09.038738   63745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:24:09.060011   63745 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:24:09.214488   63745 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:24:09.386037   63745 docker.go:233] disabling docker service ...
	I0311 21:24:09.386118   63745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:24:09.404707   63745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:24:09.426398   63745 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:24:09.634959   63745 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:24:09.797080   63745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:24:09.813529   63745 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:24:09.837762   63745 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0311 21:24:09.837824   63745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:24:09.850857   63745 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:24:09.850918   63745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:24:09.863945   63745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:24:09.877962   63745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
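
The three sed commands above rewrite the CRI-O drop-in config: pin the pause image, force the cgroupfs cgroup manager, and reset conmon_cgroup to "pod". A minimal Go equivalent of those edits (illustrative; minikube does this with sed over SSH, and the file path requires root to write):

// criocfg_sketch.go: apply the same three edits to the CRI-O drop-in config.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Pin the pause image.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	// Drop any existing conmon_cgroup line.
	out = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(out, nil)
	// Force cgroupfs and re-add conmon_cgroup right after it.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}
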
	I0311 21:24:09.889768   63745 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:24:09.903483   63745 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:24:09.917443   63745 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:24:09.917498   63745 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:24:09.935583   63745 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
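
The sequence above is a fallback: the bridge-nf-call-iptables sysctl is missing, so br_netfilter is loaded with modprobe before IPv4 forwarding is enabled. A minimal sketch of that fallback logic in Go (illustrative; commands run locally here, whereas minikube runs them over SSH on the guest):

// netfilter_sketch.go: load br_netfilter if the bridge sysctl is absent, then enable ip_forward.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Check whether the netfilter bridge sysctl exists.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Not present (e.g. exit status 255 as in the log): try loading the module.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter failed:", err)
		}
	}
	// Enable IPv4 forwarding regardless.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
	}
}
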
	I0311 21:24:09.949991   63745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:24:10.123625   63745 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:24:10.295549   63745 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:24:10.295636   63745 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:24:10.302449   63745 start.go:562] Will wait 60s for crictl version
	I0311 21:24:10.302500   63745 ssh_runner.go:195] Run: which crictl
	I0311 21:24:10.307104   63745 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:24:10.356071   63745 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:24:10.356123   63745 ssh_runner.go:195] Run: crio --version
	I0311 21:24:10.398097   63745 ssh_runner.go:195] Run: crio --version
	I0311 21:24:10.434553   63745 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0311 21:24:10.436631   63745 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:24:10.439439   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:10.439808   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:24:00 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:24:10.439842   63745 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:24:10.440024   63745 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0311 21:24:10.445892   63745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:24:10.462772   63745 kubeadm.go:877] updating cluster {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:24:10.462919   63745 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:24:10.463021   63745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:24:10.503009   63745 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:24:10.503085   63745 ssh_runner.go:195] Run: which lz4
	I0311 21:24:10.507771   63745 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:24:10.512601   63745 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:24:10.512624   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0311 21:24:12.608035   63745 crio.go:444] duration metric: took 2.1002854s to copy over tarball
	I0311 21:24:12.608110   63745 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:24:16.012374   63745 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.404236515s)
	I0311 21:24:16.012395   63745 crio.go:451] duration metric: took 3.404337578s to extract the tarball
	I0311 21:24:16.012402   63745 ssh_runner.go:146] rm: /preloaded.tar.lz4
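
The preload step above checks for /preloaded.tar.lz4 on the guest, copies the cached image tarball over when it is absent, extracts it into /var with lz4, and removes it. A small Go sketch of that flow (illustrative; the scp/ssh plumbing is omitted, and the tar invocation mirrors the command shown in the log):

// preload_sketch.go: extract a preloaded image tarball into /var and clean up.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball string) error {
	// In minikube the tarball is scp'd over first; here we just require it to exist.
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball not present: %w", err)
	}
	// Same tar invocation as in the log: lz4 decompression into /var,
	// preserving security.capability xattrs.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return err
	}
	// Remove the tarball afterwards, as the log does.
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
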
	I0311 21:24:16.065524   63745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:24:16.149779   63745 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:24:16.149809   63745 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:24:16.149874   63745 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:24:16.150152   63745 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:24:16.150334   63745 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:24:16.150519   63745 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:24:16.150680   63745 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:24:16.150812   63745 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0311 21:24:16.150918   63745 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:24:16.151020   63745 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0311 21:24:16.153043   63745 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:24:16.153134   63745 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0311 21:24:16.153237   63745 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:24:16.153697   63745 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:24:16.154064   63745 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:24:16.154097   63745 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0311 21:24:16.154158   63745 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:24:16.154238   63745 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:24:16.307853   63745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:24:16.309853   63745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:24:16.311226   63745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0311 21:24:16.329794   63745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0311 21:24:16.334030   63745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:24:16.343839   63745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0311 21:24:16.363248   63745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:24:16.434985   63745 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:24:16.479614   63745 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0311 21:24:16.479654   63745 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:24:16.479691   63745 ssh_runner.go:195] Run: which crictl
	I0311 21:24:16.524672   63745 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0311 21:24:16.524719   63745 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:24:16.524785   63745 ssh_runner.go:195] Run: which crictl
	I0311 21:24:16.524911   63745 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0311 21:24:16.524937   63745 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:24:16.524964   63745 ssh_runner.go:195] Run: which crictl
	I0311 21:24:16.620935   63745 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0311 21:24:16.620975   63745 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0311 21:24:16.621022   63745 ssh_runner.go:195] Run: which crictl
	I0311 21:24:16.621121   63745 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0311 21:24:16.621147   63745 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:24:16.621181   63745 ssh_runner.go:195] Run: which crictl
	I0311 21:24:16.627571   63745 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0311 21:24:16.627606   63745 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0311 21:24:16.627644   63745 ssh_runner.go:195] Run: which crictl
	I0311 21:24:16.627655   63745 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0311 21:24:16.627690   63745 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:24:16.627725   63745 ssh_runner.go:195] Run: which crictl
	I0311 21:24:16.755113   63745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:24:16.755186   63745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0311 21:24:16.755221   63745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:24:16.755271   63745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:24:16.755306   63745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0311 21:24:16.755348   63745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:24:16.755378   63745 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0311 21:24:16.975164   63745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0311 21:24:16.975210   63745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0311 21:24:16.975254   63745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0311 21:24:16.975310   63745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0311 21:24:16.975371   63745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0311 21:24:16.975412   63745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0311 21:24:16.975465   63745 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0311 21:24:16.975502   63745 cache_images.go:92] duration metric: took 825.677055ms to LoadCachedImages
	W0311 21:24:16.975570   63745 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0311 21:24:16.975590   63745 kubeadm.go:928] updating node { 192.168.72.52 8443 v1.20.0 crio true true} ...
	I0311 21:24:16.975703   63745 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239315 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:24:16.975779   63745 ssh_runner.go:195] Run: crio config
	I0311 21:24:17.060584   63745 cni.go:84] Creating CNI manager for ""
	I0311 21:24:17.060606   63745 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:24:17.060620   63745 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:24:17.060640   63745 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.52 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239315 NodeName:old-k8s-version-239315 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0311 21:24:17.060792   63745 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239315"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:24:17.060856   63745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0311 21:24:17.076060   63745 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:24:17.076134   63745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:24:17.094549   63745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0311 21:24:17.116452   63745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:24:17.137310   63745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0311 21:24:17.159332   63745 ssh_runner.go:195] Run: grep 192.168.72.52	control-plane.minikube.internal$ /etc/hosts
	I0311 21:24:17.169229   63745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:24:17.188722   63745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:24:17.348877   63745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:24:17.375222   63745 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315 for IP: 192.168.72.52
	I0311 21:24:17.375250   63745 certs.go:194] generating shared ca certs ...
	I0311 21:24:17.375274   63745 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:24:17.375427   63745 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:24:17.375489   63745 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:24:17.375510   63745 certs.go:256] generating profile certs ...
	I0311 21:24:17.375610   63745 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/client.key
	I0311 21:24:17.375635   63745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/client.crt with IP's: []
	I0311 21:24:17.698403   63745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/client.crt ...
	I0311 21:24:17.698433   63745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/client.crt: {Name:mkdbf6d4f3f1e07bf72b429104899d1e79a254a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:24:17.698703   63745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/client.key ...
	I0311 21:24:17.698750   63745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/client.key: {Name:mk1c9660c0697ee2974af1d48018a6c224632dac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:24:17.698893   63745 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key.1e888bb1
	I0311 21:24:17.698922   63745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.crt.1e888bb1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.52]
	I0311 21:24:18.409911   63745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.crt.1e888bb1 ...
	I0311 21:24:18.409941   63745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.crt.1e888bb1: {Name:mk8210ac89b810efdae8f50944cc64c6fca45261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:24:18.418875   63745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key.1e888bb1 ...
	I0311 21:24:18.418905   63745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key.1e888bb1: {Name:mkce763d3a36659a237e1b5919f652d5d06e30b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:24:18.419056   63745 certs.go:381] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.crt.1e888bb1 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.crt
	I0311 21:24:18.419163   63745 certs.go:385] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key.1e888bb1 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key
	I0311 21:24:18.419250   63745 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key
	I0311 21:24:18.419272   63745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.crt with IP's: []
	I0311 21:24:18.691321   63745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.crt ...
	I0311 21:24:18.691395   63745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.crt: {Name:mk01951e50cd1d77e617af1a7d767cd25703f957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:24:18.691595   63745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key ...
	I0311 21:24:18.691639   63745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key: {Name:mk4d80047d2a6797c981d986d8daca6e63a38c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:24:18.691870   63745 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:24:18.691932   63745 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:24:18.691952   63745 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:24:18.692001   63745 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:24:18.692046   63745 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:24:18.692105   63745 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:24:18.692167   63745 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:24:18.693013   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:24:18.735845   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:24:18.770635   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:24:18.808636   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:24:18.854990   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 21:24:18.894571   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0311 21:24:18.927455   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:24:18.964378   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:24:19.005830   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:24:19.037581   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:24:19.071319   63745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:24:19.104549   63745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:24:19.127445   63745 ssh_runner.go:195] Run: openssl version
	I0311 21:24:19.135014   63745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:24:19.151903   63745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:24:19.159025   63745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:24:19.159073   63745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:24:19.166478   63745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:24:19.186567   63745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:24:19.202828   63745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:24:19.209942   63745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:24:19.209986   63745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:24:19.218204   63745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:24:19.233468   63745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:24:19.250781   63745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:24:19.257722   63745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:24:19.257771   63745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:24:19.268049   63745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:24:19.294598   63745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:24:19.301402   63745 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 21:24:19.301462   63745 kubeadm.go:391] StartCluster: {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:24:19.301558   63745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:24:19.301717   63745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:24:19.367471   63745 cri.go:89] found id: ""
	I0311 21:24:19.367539   63745 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0311 21:24:19.383963   63745 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:24:19.400305   63745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:24:19.416541   63745 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:24:19.416566   63745 kubeadm.go:156] found existing configuration files:
	
	I0311 21:24:19.416612   63745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:24:19.431447   63745 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:24:19.431500   63745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:24:19.449552   63745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:24:19.462819   63745 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:24:19.462883   63745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:24:19.479708   63745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:24:19.509203   63745 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:24:19.509262   63745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:24:19.533158   63745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:24:19.565035   63745 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:24:19.565093   63745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:24:19.592948   63745 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:24:19.796671   63745 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:24:19.796764   63745 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:24:20.031874   63745 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:24:20.031968   63745 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:24:20.032044   63745 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:24:20.348218   63745 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:24:20.351549   63745 out.go:204]   - Generating certificates and keys ...
	I0311 21:24:20.352941   63745 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:24:20.353042   63745 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:24:20.513462   63745 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0311 21:24:20.700010   63745 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0311 21:24:20.828299   63745 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0311 21:24:21.160775   63745 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0311 21:24:21.328042   63745 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0311 21:24:21.328463   63745 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-239315] and IPs [192.168.72.52 127.0.0.1 ::1]
	I0311 21:24:21.420963   63745 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0311 21:24:21.425068   63745 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-239315] and IPs [192.168.72.52 127.0.0.1 ::1]
	I0311 21:24:21.838757   63745 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0311 21:24:21.963879   63745 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0311 21:24:22.145437   63745 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0311 21:24:22.145866   63745 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:24:22.245469   63745 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:24:22.360169   63745 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:24:22.600029   63745 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:24:22.745647   63745 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:24:22.769635   63745 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:24:22.769751   63745 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:24:22.769800   63745 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:24:23.004907   63745 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:24:23.007194   63745 out.go:204]   - Booting up control plane ...
	I0311 21:24:23.007309   63745 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:24:23.014432   63745 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:24:23.016115   63745 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:24:23.027973   63745 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:24:23.033024   63745 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:25:02.982037   63745 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:25:02.982146   63745 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:25:02.982389   63745 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:25:07.982366   63745 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:25:07.982619   63745 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:25:17.982512   63745 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:25:17.982728   63745 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:25:37.983437   63745 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:25:37.983703   63745 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:26:17.985101   63745 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:26:17.985359   63745 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:26:17.985379   63745 kubeadm.go:309] 
	I0311 21:26:17.985427   63745 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:26:17.985488   63745 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:26:17.985498   63745 kubeadm.go:309] 
	I0311 21:26:17.985540   63745 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:26:17.985587   63745 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:26:17.985733   63745 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:26:17.985752   63745 kubeadm.go:309] 
	I0311 21:26:17.985880   63745 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:26:17.985926   63745 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:26:17.985987   63745 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:26:17.985995   63745 kubeadm.go:309] 
	I0311 21:26:17.986125   63745 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:26:17.986233   63745 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:26:17.986245   63745 kubeadm.go:309] 
	I0311 21:26:17.986393   63745 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:26:17.986531   63745 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:26:17.986646   63745 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:26:17.986759   63745 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:26:17.986770   63745 kubeadm.go:309] 
	I0311 21:26:17.987839   63745 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:26:17.987989   63745 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:26:17.988097   63745 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0311 21:26:17.988239   63745 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-239315] and IPs [192.168.72.52 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-239315] and IPs [192.168.72.52 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0311 21:26:17.988291   63745 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:26:19.518192   63745 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.529878179s)
	I0311 21:26:19.518273   63745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:26:19.534335   63745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:26:19.545973   63745 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:26:19.545998   63745 kubeadm.go:156] found existing configuration files:
	
	I0311 21:26:19.546069   63745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:26:19.556856   63745 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:26:19.556919   63745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:26:19.568082   63745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:26:19.578609   63745 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:26:19.578665   63745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:26:19.590523   63745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:26:19.601234   63745 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:26:19.601297   63745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:26:19.612394   63745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:26:19.622439   63745 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:26:19.622497   63745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:26:19.633778   63745 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:26:19.900175   63745 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:28:16.232636   63745 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:28:16.232716   63745 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:28:16.234457   63745 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:28:16.234520   63745 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:28:16.234613   63745 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:28:16.234747   63745 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:28:16.234907   63745 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:28:16.234992   63745 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:28:16.237176   63745 out.go:204]   - Generating certificates and keys ...
	I0311 21:28:16.237267   63745 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:28:16.237335   63745 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:28:16.237412   63745 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:28:16.237466   63745 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:28:16.237532   63745 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:28:16.237608   63745 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:28:16.237704   63745 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:28:16.237795   63745 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:28:16.237889   63745 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:28:16.237973   63745 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:28:16.238009   63745 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:28:16.238091   63745 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:28:16.238148   63745 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:28:16.238194   63745 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:28:16.238257   63745 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:28:16.238318   63745 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:28:16.238440   63745 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:28:16.238553   63745 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:28:16.238607   63745 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:28:16.238663   63745 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:28:16.240259   63745 out.go:204]   - Booting up control plane ...
	I0311 21:28:16.240327   63745 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:28:16.240417   63745 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:28:16.240490   63745 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:28:16.240583   63745 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:28:16.240805   63745 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:28:16.240864   63745 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:28:16.240923   63745 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:28:16.241095   63745 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:28:16.241163   63745 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:28:16.241328   63745 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:28:16.241393   63745 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:28:16.241546   63745 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:28:16.241603   63745 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:28:16.241760   63745 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:28:16.241820   63745 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:28:16.241992   63745 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:28:16.242010   63745 kubeadm.go:309] 
	I0311 21:28:16.242058   63745 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:28:16.242124   63745 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:28:16.242134   63745 kubeadm.go:309] 
	I0311 21:28:16.242185   63745 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:28:16.242226   63745 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:28:16.242342   63745 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:28:16.242349   63745 kubeadm.go:309] 
	I0311 21:28:16.242446   63745 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:28:16.242478   63745 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:28:16.242506   63745 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:28:16.242513   63745 kubeadm.go:309] 
	I0311 21:28:16.242597   63745 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:28:16.242668   63745 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:28:16.242675   63745 kubeadm.go:309] 
	I0311 21:28:16.242780   63745 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:28:16.242856   63745 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:28:16.242923   63745 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:28:16.243019   63745 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:28:16.243032   63745 kubeadm.go:309] 
	I0311 21:28:16.243122   63745 kubeadm.go:393] duration metric: took 3m56.94166406s to StartCluster
	I0311 21:28:16.243173   63745 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:28:16.243230   63745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:28:16.290041   63745 cri.go:89] found id: ""
	I0311 21:28:16.290067   63745 logs.go:276] 0 containers: []
	W0311 21:28:16.290076   63745 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:28:16.290084   63745 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:28:16.290148   63745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:28:16.328917   63745 cri.go:89] found id: ""
	I0311 21:28:16.328942   63745 logs.go:276] 0 containers: []
	W0311 21:28:16.328952   63745 logs.go:278] No container was found matching "etcd"
	I0311 21:28:16.328960   63745 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:28:16.329020   63745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:28:16.365933   63745 cri.go:89] found id: ""
	I0311 21:28:16.365960   63745 logs.go:276] 0 containers: []
	W0311 21:28:16.365971   63745 logs.go:278] No container was found matching "coredns"
	I0311 21:28:16.365979   63745 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:28:16.366035   63745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:28:16.403497   63745 cri.go:89] found id: ""
	I0311 21:28:16.403524   63745 logs.go:276] 0 containers: []
	W0311 21:28:16.403534   63745 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:28:16.403542   63745 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:28:16.403596   63745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:28:16.450614   63745 cri.go:89] found id: ""
	I0311 21:28:16.450634   63745 logs.go:276] 0 containers: []
	W0311 21:28:16.450640   63745 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:28:16.450647   63745 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:28:16.450696   63745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:28:16.486181   63745 cri.go:89] found id: ""
	I0311 21:28:16.486204   63745 logs.go:276] 0 containers: []
	W0311 21:28:16.486212   63745 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:28:16.486218   63745 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:28:16.486261   63745 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:28:16.520959   63745 cri.go:89] found id: ""
	I0311 21:28:16.520983   63745 logs.go:276] 0 containers: []
	W0311 21:28:16.520993   63745 logs.go:278] No container was found matching "kindnet"
	I0311 21:28:16.521004   63745 logs.go:123] Gathering logs for kubelet ...
	I0311 21:28:16.521019   63745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:28:16.570743   63745 logs.go:123] Gathering logs for dmesg ...
	I0311 21:28:16.570768   63745 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:28:16.584567   63745 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:28:16.584591   63745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:28:16.704504   63745 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:28:16.704524   63745 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:28:16.704537   63745 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:28:16.797145   63745 logs.go:123] Gathering logs for container status ...
	I0311 21:28:16.797179   63745 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0311 21:28:16.855253   63745 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0311 21:28:16.855304   63745 out.go:239] * 
	* 
	W0311 21:28:16.855369   63745 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:28:16.855400   63745 out.go:239] * 
	* 
	W0311 21:28:16.856546   63745 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 21:28:16.860533   63745 out.go:177] 
	W0311 21:28:16.861994   63745 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:28:16.862061   63745 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0311 21:28:16.862092   63745 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0311 21:28:16.864589   63745 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-239315 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315: exit status 6 (234.62869ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:28:17.145793   69928 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-239315" does not appear in /home/jenkins/minikube-integration/18358-11004/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-239315" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (291.81s)
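The repeated [kubelet-check] failures above are kubeadm polling the kubelet's health endpoint, described in the log as the equivalent of `curl -sSL http://localhost:10248/healthz`, and getting connection refused until the wait-control-plane phase gives up. As a rough illustration of that probe loop (not kubeadm's or minikube's actual code; the 40s deadline and the 5s retry interval are assumptions loosely taken from the "Initial timeout of 40s" message), a minimal Go sketch:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// Probe the kubelet healthz endpoint the way the [kubelet-check] lines do:
	// an HTTP GET against http://localhost:10248/healthz, retried until a
	// deadline passes. "connection refused" means the kubelet is not up yet.
	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		deadline := time.Now().Add(40 * time.Second) // assumed, mirrors "Initial timeout of 40s"
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err != nil {
				fmt.Println("kubelet not healthy yet:", err)
				time.Sleep(5 * time.Second) // assumed retry interval
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet healthz OK")
				return
			}
			fmt.Println("unexpected status:", resp.Status)
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for the kubelet healthz endpoint")
	}

If the probe never succeeds, the log's own advice applies: inspect `systemctl status kubelet` and `journalctl -xeu kubelet`, list control-plane containers with `crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause`, and consider the suggested `--extra-config=kubelet.cgroup-driver=systemd` start flag, which targets a kubelet/cri-o cgroup-driver mismatch, a common cause of this symptom.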

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-766430 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-766430 --alsologtostderr -v=3: exit status 82 (2m0.568672032s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-766430"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 21:26:32.854003   69446 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:26:32.854101   69446 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:26:32.854114   69446 out.go:304] Setting ErrFile to fd 2...
	I0311 21:26:32.854118   69446 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:26:32.854323   69446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:26:32.854543   69446 out.go:298] Setting JSON to false
	I0311 21:26:32.854619   69446 mustload.go:65] Loading cluster: default-k8s-diff-port-766430
	I0311 21:26:32.854934   69446 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:26:32.854995   69446 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/config.json ...
	I0311 21:26:32.855155   69446 mustload.go:65] Loading cluster: default-k8s-diff-port-766430
	I0311 21:26:32.855260   69446 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:26:32.855285   69446 stop.go:39] StopHost: default-k8s-diff-port-766430
	I0311 21:26:32.855689   69446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:26:32.855733   69446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:26:32.870539   69446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41159
	I0311 21:26:32.871131   69446 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:26:32.871867   69446 main.go:141] libmachine: Using API Version  1
	I0311 21:26:32.871889   69446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:26:32.872287   69446 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:26:32.874768   69446 out.go:177] * Stopping node "default-k8s-diff-port-766430"  ...
	I0311 21:26:32.876540   69446 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0311 21:26:32.876571   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:26:32.876852   69446 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0311 21:26:32.876909   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:26:32.880189   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:26:32.880643   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:25:39 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:26:32.880672   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:26:32.880884   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:26:32.881084   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:26:32.881274   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:26:32.881441   69446 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:26:33.007559   69446 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0311 21:26:33.075379   69446 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0311 21:26:33.153344   69446 main.go:141] libmachine: Stopping "default-k8s-diff-port-766430"...
	I0311 21:26:33.153388   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:26:33.155034   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Stop
	I0311 21:26:33.161299   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 0/120
	I0311 21:26:34.163173   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 1/120
	I0311 21:26:35.164429   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 2/120
	I0311 21:26:36.165842   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 3/120
	I0311 21:26:37.167281   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 4/120
	I0311 21:26:38.169405   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 5/120
	I0311 21:26:39.171262   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 6/120
	I0311 21:26:40.172555   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 7/120
	I0311 21:26:41.173891   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 8/120
	I0311 21:26:42.175269   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 9/120
	I0311 21:26:43.177747   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 10/120
	I0311 21:26:44.179276   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 11/120
	I0311 21:26:45.180701   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 12/120
	I0311 21:26:46.182079   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 13/120
	I0311 21:26:47.183432   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 14/120
	I0311 21:26:48.185181   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 15/120
	I0311 21:26:49.186475   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 16/120
	I0311 21:26:50.188041   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 17/120
	I0311 21:26:51.189431   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 18/120
	I0311 21:26:52.191160   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 19/120
	I0311 21:26:53.193271   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 20/120
	I0311 21:26:54.194622   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 21/120
	I0311 21:26:55.196019   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 22/120
	I0311 21:26:56.197353   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 23/120
	I0311 21:26:57.198678   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 24/120
	I0311 21:26:58.200576   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 25/120
	I0311 21:26:59.201824   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 26/120
	I0311 21:27:00.203176   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 27/120
	I0311 21:27:01.204552   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 28/120
	I0311 21:27:02.205966   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 29/120
	I0311 21:27:03.208003   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 30/120
	I0311 21:27:04.209349   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 31/120
	I0311 21:27:05.210702   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 32/120
	I0311 21:27:06.212240   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 33/120
	I0311 21:27:07.213892   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 34/120
	I0311 21:27:08.215474   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 35/120
	I0311 21:27:09.216962   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 36/120
	I0311 21:27:10.218580   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 37/120
	I0311 21:27:11.219871   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 38/120
	I0311 21:27:12.221425   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 39/120
	I0311 21:27:13.223542   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 40/120
	I0311 21:27:14.224933   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 41/120
	I0311 21:27:15.226454   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 42/120
	I0311 21:27:16.228359   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 43/120
	I0311 21:27:17.229821   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 44/120
	I0311 21:27:18.231681   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 45/120
	I0311 21:27:19.232975   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 46/120
	I0311 21:27:20.234203   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 47/120
	I0311 21:27:21.235547   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 48/120
	I0311 21:27:22.236680   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 49/120
	I0311 21:27:23.238693   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 50/120
	I0311 21:27:24.240086   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 51/120
	I0311 21:27:25.241221   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 52/120
	I0311 21:27:26.243235   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 53/120
	I0311 21:27:27.244507   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 54/120
	I0311 21:27:28.246682   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 55/120
	I0311 21:27:29.247916   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 56/120
	I0311 21:27:30.249617   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 57/120
	I0311 21:27:31.251171   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 58/120
	I0311 21:27:32.252692   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 59/120
	I0311 21:27:33.255276   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 60/120
	I0311 21:27:34.256685   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 61/120
	I0311 21:27:35.257980   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 62/120
	I0311 21:27:36.259489   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 63/120
	I0311 21:27:37.260885   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 64/120
	I0311 21:27:38.262987   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 65/120
	I0311 21:27:39.264495   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 66/120
	I0311 21:27:40.266189   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 67/120
	I0311 21:27:41.267637   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 68/120
	I0311 21:27:42.268999   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 69/120
	I0311 21:27:43.270977   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 70/120
	I0311 21:27:44.272980   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 71/120
	I0311 21:27:45.274556   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 72/120
	I0311 21:27:46.276186   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 73/120
	I0311 21:27:47.277412   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 74/120
	I0311 21:27:48.279094   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 75/120
	I0311 21:27:49.280470   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 76/120
	I0311 21:27:50.281879   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 77/120
	I0311 21:27:51.283301   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 78/120
	I0311 21:27:52.284803   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 79/120
	I0311 21:27:53.287106   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 80/120
	I0311 21:27:54.288537   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 81/120
	I0311 21:27:55.289931   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 82/120
	I0311 21:27:56.291389   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 83/120
	I0311 21:27:57.292895   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 84/120
	I0311 21:27:58.295075   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 85/120
	I0311 21:27:59.296588   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 86/120
	I0311 21:28:00.298033   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 87/120
	I0311 21:28:01.299368   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 88/120
	I0311 21:28:02.300951   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 89/120
	I0311 21:28:03.303196   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 90/120
	I0311 21:28:04.304614   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 91/120
	I0311 21:28:05.306106   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 92/120
	I0311 21:28:06.307559   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 93/120
	I0311 21:28:07.308935   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 94/120
	I0311 21:28:08.310719   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 95/120
	I0311 21:28:09.312157   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 96/120
	I0311 21:28:10.313637   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 97/120
	I0311 21:28:11.314911   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 98/120
	I0311 21:28:12.316504   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 99/120
	I0311 21:28:13.318669   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 100/120
	I0311 21:28:14.319975   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 101/120
	I0311 21:28:15.321513   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 102/120
	I0311 21:28:16.323174   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 103/120
	I0311 21:28:17.324267   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 104/120
	I0311 21:28:18.326206   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 105/120
	I0311 21:28:19.327659   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 106/120
	I0311 21:28:20.329080   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 107/120
	I0311 21:28:21.330374   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 108/120
	I0311 21:28:22.331739   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 109/120
	I0311 21:28:23.333872   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 110/120
	I0311 21:28:24.335274   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 111/120
	I0311 21:28:25.336485   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 112/120
	I0311 21:28:26.337982   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 113/120
	I0311 21:28:27.339162   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 114/120
	I0311 21:28:28.341423   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 115/120
	I0311 21:28:29.342888   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 116/120
	I0311 21:28:30.344544   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 117/120
	I0311 21:28:31.346000   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 118/120
	I0311 21:28:32.347511   69446 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for machine to stop 119/120
	I0311 21:28:33.348676   69446 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0311 21:28:33.348752   69446 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0311 21:28:33.350587   69446 out.go:177] 
	W0311 21:28:33.351936   69446 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0311 21:28:33.351956   69446 out.go:239] * 
	* 
	W0311 21:28:33.355359   69446 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 21:28:33.356901   69446 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-766430 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766430 -n default-k8s-diff-port-766430
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766430 -n default-k8s-diff-port-766430: exit status 3 (18.586494455s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:28:51.945077   70104 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host
	E0311 21:28:51.945097   70104 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-766430" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)
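The Stop failure above (and the no-preload one that follows) has the same shape: after backing up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup, the driver's Stop call is issued and the machine state is polled once per second ("Waiting for machine to stop N/120"); after 120 attempts the VM still reports "Running" and the command exits with GUEST_STOP_TIMEOUT. A minimal Go sketch of that poll-until-stopped pattern follows; it is illustrative only, the getState stand-in is hypothetical and not libmachine's real driver interface:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getState is a hypothetical stand-in for the driver's GetState call;
	// here it always reports "Running" to reproduce the timeout path in the log.
	func getState() string { return "Running" }

	// waitForStop polls the VM state once per second and gives up after the
	// given number of attempts, matching the "Waiting for machine to stop N/120" lines.
	func waitForStop(attempts int) error {
		for i := 0; i < attempts; i++ {
			if getState() != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(120); err != nil {
			fmt.Println("stop err:", err)
		}
	}

Run as-is, the sketch spends the full two minutes before giving up, which matches the roughly 2m0.5s the stop command took in both failures before returning exit status 82.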

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-324578 --alsologtostderr -v=3
E0311 21:26:34.157984   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-324578 --alsologtostderr -v=3: exit status 82 (2m0.522068062s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-324578"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 21:26:34.018142   69504 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:26:34.018244   69504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:26:34.018249   69504 out.go:304] Setting ErrFile to fd 2...
	I0311 21:26:34.018253   69504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:26:34.018429   69504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:26:34.018664   69504 out.go:298] Setting JSON to false
	I0311 21:26:34.018731   69504 mustload.go:65] Loading cluster: no-preload-324578
	I0311 21:26:34.019029   69504 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:26:34.019089   69504 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/config.json ...
	I0311 21:26:34.019256   69504 mustload.go:65] Loading cluster: no-preload-324578
	I0311 21:26:34.019358   69504 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:26:34.019392   69504 stop.go:39] StopHost: no-preload-324578
	I0311 21:26:34.019746   69504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:26:34.019797   69504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:26:34.035256   69504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34575
	I0311 21:26:34.035727   69504 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:26:34.036327   69504 main.go:141] libmachine: Using API Version  1
	I0311 21:26:34.036371   69504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:26:34.036764   69504 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:26:34.039395   69504 out.go:177] * Stopping node "no-preload-324578"  ...
	I0311 21:26:34.040755   69504 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0311 21:26:34.040793   69504 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:26:34.041012   69504 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0311 21:26:34.041051   69504 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:26:34.043865   69504 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:26:34.044227   69504 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:24:39 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:26:34.044255   69504 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:26:34.044394   69504 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:26:34.044551   69504 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:26:34.044723   69504 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:26:34.044877   69504 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:26:34.143661   69504 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0311 21:26:34.207564   69504 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0311 21:26:34.278043   69504 main.go:141] libmachine: Stopping "no-preload-324578"...
	I0311 21:26:34.278088   69504 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:26:34.279810   69504 main.go:141] libmachine: (no-preload-324578) Calling .Stop
	I0311 21:26:34.283815   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 0/120
	I0311 21:26:35.285986   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 1/120
	I0311 21:26:36.287208   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 2/120
	I0311 21:26:37.288460   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 3/120
	I0311 21:26:38.289783   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 4/120
	I0311 21:26:39.291691   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 5/120
	I0311 21:26:40.293058   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 6/120
	I0311 21:26:41.295296   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 7/120
	I0311 21:26:42.296693   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 8/120
	I0311 21:26:43.298148   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 9/120
	I0311 21:26:44.299697   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 10/120
	I0311 21:26:45.301052   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 11/120
	I0311 21:26:46.303344   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 12/120
	I0311 21:26:47.304871   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 13/120
	I0311 21:26:48.306220   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 14/120
	I0311 21:26:49.308290   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 15/120
	I0311 21:26:50.309671   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 16/120
	I0311 21:26:51.310952   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 17/120
	I0311 21:26:52.312326   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 18/120
	I0311 21:26:53.313563   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 19/120
	I0311 21:26:54.315610   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 20/120
	I0311 21:26:55.316930   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 21/120
	I0311 21:26:56.319142   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 22/120
	I0311 21:26:57.320437   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 23/120
	I0311 21:26:58.321918   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 24/120
	I0311 21:26:59.323750   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 25/120
	I0311 21:27:00.325322   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 26/120
	I0311 21:27:01.327101   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 27/120
	I0311 21:27:02.328594   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 28/120
	I0311 21:27:03.330089   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 29/120
	I0311 21:27:04.332298   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 30/120
	I0311 21:27:05.333693   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 31/120
	I0311 21:27:06.335269   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 32/120
	I0311 21:27:07.336711   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 33/120
	I0311 21:27:08.338216   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 34/120
	I0311 21:27:09.340278   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 35/120
	I0311 21:27:10.341669   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 36/120
	I0311 21:27:11.343013   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 37/120
	I0311 21:27:12.345254   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 38/120
	I0311 21:27:13.346552   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 39/120
	I0311 21:27:14.348898   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 40/120
	I0311 21:27:15.350549   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 41/120
	I0311 21:27:16.351799   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 42/120
	I0311 21:27:17.353204   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 43/120
	I0311 21:27:18.354505   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 44/120
	I0311 21:27:19.356530   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 45/120
	I0311 21:27:20.358004   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 46/120
	I0311 21:27:21.359139   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 47/120
	I0311 21:27:22.360474   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 48/120
	I0311 21:27:23.361879   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 49/120
	I0311 21:27:24.364291   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 50/120
	I0311 21:27:25.365528   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 51/120
	I0311 21:27:26.367091   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 52/120
	I0311 21:27:27.368574   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 53/120
	I0311 21:27:28.370013   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 54/120
	I0311 21:27:29.372149   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 55/120
	I0311 21:27:30.373836   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 56/120
	I0311 21:27:31.375322   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 57/120
	I0311 21:27:32.376783   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 58/120
	I0311 21:27:33.378264   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 59/120
	I0311 21:27:34.380343   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 60/120
	I0311 21:27:35.381843   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 61/120
	I0311 21:27:36.383137   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 62/120
	I0311 21:27:37.384448   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 63/120
	I0311 21:27:38.385697   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 64/120
	I0311 21:27:39.387586   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 65/120
	I0311 21:27:40.389239   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 66/120
	I0311 21:27:41.390585   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 67/120
	I0311 21:27:42.392222   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 68/120
	I0311 21:27:43.393884   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 69/120
	I0311 21:27:44.395905   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 70/120
	I0311 21:27:45.397235   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 71/120
	I0311 21:27:46.398666   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 72/120
	I0311 21:27:47.399929   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 73/120
	I0311 21:27:48.401330   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 74/120
	I0311 21:27:49.403318   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 75/120
	I0311 21:27:50.404863   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 76/120
	I0311 21:27:51.406341   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 77/120
	I0311 21:27:52.407975   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 78/120
	I0311 21:27:53.409351   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 79/120
	I0311 21:27:54.411556   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 80/120
	I0311 21:27:55.413059   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 81/120
	I0311 21:27:56.414547   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 82/120
	I0311 21:27:57.416405   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 83/120
	I0311 21:27:58.417922   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 84/120
	I0311 21:27:59.420101   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 85/120
	I0311 21:28:00.421617   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 86/120
	I0311 21:28:01.423052   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 87/120
	I0311 21:28:02.424514   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 88/120
	I0311 21:28:03.426012   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 89/120
	I0311 21:28:04.428154   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 90/120
	I0311 21:28:05.429620   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 91/120
	I0311 21:28:06.430999   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 92/120
	I0311 21:28:07.432280   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 93/120
	I0311 21:28:08.433458   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 94/120
	I0311 21:28:09.435187   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 95/120
	I0311 21:28:10.436623   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 96/120
	I0311 21:28:11.437804   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 97/120
	I0311 21:28:12.439192   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 98/120
	I0311 21:28:13.440486   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 99/120
	I0311 21:28:14.442462   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 100/120
	I0311 21:28:15.443756   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 101/120
	I0311 21:28:16.445007   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 102/120
	I0311 21:28:17.447201   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 103/120
	I0311 21:28:18.448631   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 104/120
	I0311 21:28:19.450478   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 105/120
	I0311 21:28:20.451787   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 106/120
	I0311 21:28:21.453374   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 107/120
	I0311 21:28:22.454781   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 108/120
	I0311 21:28:23.456374   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 109/120
	I0311 21:28:24.458508   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 110/120
	I0311 21:28:25.459871   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 111/120
	I0311 21:28:26.461069   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 112/120
	I0311 21:28:27.462852   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 113/120
	I0311 21:28:28.464196   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 114/120
	I0311 21:28:29.466277   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 115/120
	I0311 21:28:30.467780   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 116/120
	I0311 21:28:31.469017   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 117/120
	I0311 21:28:32.470288   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 118/120
	I0311 21:28:33.472054   69504 main.go:141] libmachine: (no-preload-324578) Waiting for machine to stop 119/120
	I0311 21:28:34.472699   69504 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0311 21:28:34.472762   69504 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0311 21:28:34.474618   69504 out.go:177] 
	W0311 21:28:34.475858   69504 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0311 21:28:34.475873   69504 out.go:239] * 
	* 
	W0311 21:28:34.478943   69504 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 21:28:34.480261   69504 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-324578 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-324578 -n no-preload-324578
E0311 21:28:36.388651   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-324578 -n no-preload-324578: exit status 3 (18.484602162s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:28:52.968965   70134 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	E0311 21:28:52.968985   70134 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-324578" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (138.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-743937 --alsologtostderr -v=3
E0311 21:26:49.163018   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:26:58.807837   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 21:27:04.880159   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:27:09.643876   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:27:37.144325   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:27:37.149655   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:27:37.159935   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:27:37.180208   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:27:37.220489   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:27:37.301166   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:27:37.461794   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:27:37.782351   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:27:38.423014   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:27:38.935110   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 21:27:39.704182   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:27:42.264430   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:27:45.841155   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:27:47.384816   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:27:50.604339   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:27:55.427340   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:27:55.432598   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:27:55.442852   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:27:55.463106   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:27:55.503441   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:27:55.583815   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:27:55.744250   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:27:56.064696   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:27:56.705043   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:27:57.625737   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:27:57.985226   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:28:00.546111   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:28:05.667214   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:28:15.907437   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-743937 --alsologtostderr -v=3: exit status 82 (2m0.514456167s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-743937"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 21:26:47.624307   69614 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:26:47.624430   69614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:26:47.624440   69614 out.go:304] Setting ErrFile to fd 2...
	I0311 21:26:47.624445   69614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:26:47.624654   69614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:26:47.624891   69614 out.go:298] Setting JSON to false
	I0311 21:26:47.624959   69614 mustload.go:65] Loading cluster: embed-certs-743937
	I0311 21:26:47.625251   69614 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:26:47.625309   69614 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/config.json ...
	I0311 21:26:47.625465   69614 mustload.go:65] Loading cluster: embed-certs-743937
	I0311 21:26:47.625557   69614 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:26:47.625588   69614 stop.go:39] StopHost: embed-certs-743937
	I0311 21:26:47.625950   69614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:26:47.625992   69614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:26:47.640119   69614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40351
	I0311 21:26:47.640512   69614 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:26:47.640991   69614 main.go:141] libmachine: Using API Version  1
	I0311 21:26:47.641016   69614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:26:47.641369   69614 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:26:47.644050   69614 out.go:177] * Stopping node "embed-certs-743937"  ...
	I0311 21:26:47.645311   69614 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0311 21:26:47.645337   69614 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:26:47.645573   69614 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0311 21:26:47.645599   69614 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:26:47.648191   69614 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:26:47.648561   69614 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:25:07 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:26:47.648594   69614 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:26:47.648782   69614 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:26:47.648958   69614 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:26:47.649157   69614 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:26:47.649320   69614 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:26:47.776402   69614 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0311 21:26:47.811491   69614 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0311 21:26:47.884847   69614 main.go:141] libmachine: Stopping "embed-certs-743937"...
	I0311 21:26:47.884903   69614 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:26:47.886280   69614 main.go:141] libmachine: (embed-certs-743937) Calling .Stop
	I0311 21:26:47.889845   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 0/120
	I0311 21:26:48.891334   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 1/120
	I0311 21:26:49.892694   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 2/120
	I0311 21:26:50.894125   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 3/120
	I0311 21:26:51.895617   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 4/120
	I0311 21:26:52.897574   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 5/120
	I0311 21:26:53.898909   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 6/120
	I0311 21:26:54.900423   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 7/120
	I0311 21:26:55.901918   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 8/120
	I0311 21:26:56.903350   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 9/120
	I0311 21:26:57.905349   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 10/120
	I0311 21:26:58.906644   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 11/120
	I0311 21:26:59.908060   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 12/120
	I0311 21:27:00.909434   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 13/120
	I0311 21:27:01.910950   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 14/120
	I0311 21:27:02.912977   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 15/120
	I0311 21:27:03.914388   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 16/120
	I0311 21:27:04.915746   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 17/120
	I0311 21:27:05.917201   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 18/120
	I0311 21:27:06.918714   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 19/120
	I0311 21:27:07.920922   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 20/120
	I0311 21:27:08.922487   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 21/120
	I0311 21:27:09.923985   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 22/120
	I0311 21:27:10.925424   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 23/120
	I0311 21:27:11.926868   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 24/120
	I0311 21:27:12.928888   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 25/120
	I0311 21:27:13.931120   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 26/120
	I0311 21:27:14.932472   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 27/120
	I0311 21:27:15.933861   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 28/120
	I0311 21:27:16.935176   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 29/120
	I0311 21:27:17.937317   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 30/120
	I0311 21:27:18.939240   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 31/120
	I0311 21:27:19.940604   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 32/120
	I0311 21:27:20.941954   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 33/120
	I0311 21:27:21.943243   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 34/120
	I0311 21:27:22.945382   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 35/120
	I0311 21:27:23.946604   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 36/120
	I0311 21:27:24.947959   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 37/120
	I0311 21:27:25.949255   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 38/120
	I0311 21:27:26.950789   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 39/120
	I0311 21:27:27.953142   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 40/120
	I0311 21:27:28.955405   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 41/120
	I0311 21:27:29.957078   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 42/120
	I0311 21:27:30.958634   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 43/120
	I0311 21:27:31.960231   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 44/120
	I0311 21:27:32.962449   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 45/120
	I0311 21:27:33.963665   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 46/120
	I0311 21:27:34.965122   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 47/120
	I0311 21:27:35.966458   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 48/120
	I0311 21:27:36.967796   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 49/120
	I0311 21:27:37.970008   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 50/120
	I0311 21:27:38.971111   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 51/120
	I0311 21:27:39.972467   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 52/120
	I0311 21:27:40.973593   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 53/120
	I0311 21:27:41.974937   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 54/120
	I0311 21:27:42.976765   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 55/120
	I0311 21:27:43.978075   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 56/120
	I0311 21:27:44.979528   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 57/120
	I0311 21:27:45.980880   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 58/120
	I0311 21:27:46.982189   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 59/120
	I0311 21:27:47.984108   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 60/120
	I0311 21:27:48.985474   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 61/120
	I0311 21:27:49.986913   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 62/120
	I0311 21:27:50.988197   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 63/120
	I0311 21:27:51.989656   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 64/120
	I0311 21:27:52.991777   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 65/120
	I0311 21:27:53.993071   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 66/120
	I0311 21:27:54.994702   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 67/120
	I0311 21:27:55.996360   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 68/120
	I0311 21:27:56.997747   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 69/120
	I0311 21:27:57.999852   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 70/120
	I0311 21:27:59.001251   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 71/120
	I0311 21:28:00.002724   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 72/120
	I0311 21:28:01.004100   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 73/120
	I0311 21:28:02.005672   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 74/120
	I0311 21:28:03.007958   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 75/120
	I0311 21:28:04.009342   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 76/120
	I0311 21:28:05.011543   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 77/120
	I0311 21:28:06.012941   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 78/120
	I0311 21:28:07.014344   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 79/120
	I0311 21:28:08.016538   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 80/120
	I0311 21:28:09.017778   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 81/120
	I0311 21:28:10.019147   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 82/120
	I0311 21:28:11.020353   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 83/120
	I0311 21:28:12.021793   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 84/120
	I0311 21:28:13.023669   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 85/120
	I0311 21:28:14.024936   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 86/120
	I0311 21:28:15.026181   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 87/120
	I0311 21:28:16.027334   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 88/120
	I0311 21:28:17.028409   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 89/120
	I0311 21:28:18.030341   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 90/120
	I0311 21:28:19.031682   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 91/120
	I0311 21:28:20.033366   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 92/120
	I0311 21:28:21.034844   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 93/120
	I0311 21:28:22.036383   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 94/120
	I0311 21:28:23.038436   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 95/120
	I0311 21:28:24.039966   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 96/120
	I0311 21:28:25.041411   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 97/120
	I0311 21:28:26.042719   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 98/120
	I0311 21:28:27.044078   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 99/120
	I0311 21:28:28.046181   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 100/120
	I0311 21:28:29.047725   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 101/120
	I0311 21:28:30.049033   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 102/120
	I0311 21:28:31.050431   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 103/120
	I0311 21:28:32.051771   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 104/120
	I0311 21:28:33.053754   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 105/120
	I0311 21:28:34.055053   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 106/120
	I0311 21:28:35.056428   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 107/120
	I0311 21:28:36.057863   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 108/120
	I0311 21:28:37.059288   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 109/120
	I0311 21:28:38.061608   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 110/120
	I0311 21:28:39.063162   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 111/120
	I0311 21:28:40.064632   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 112/120
	I0311 21:28:41.066144   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 113/120
	I0311 21:28:42.067837   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 114/120
	I0311 21:28:43.070069   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 115/120
	I0311 21:28:44.071364   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 116/120
	I0311 21:28:45.072808   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 117/120
	I0311 21:28:46.074170   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 118/120
	I0311 21:28:47.075924   69614 main.go:141] libmachine: (embed-certs-743937) Waiting for machine to stop 119/120
	I0311 21:28:48.076455   69614 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0311 21:28:48.076504   69614 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0311 21:28:48.078684   69614 out.go:177] 
	W0311 21:28:48.080260   69614 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0311 21:28:48.080278   69614 out.go:239] * 
	* 
	W0311 21:28:48.083410   69614 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 21:28:48.084771   69614 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-743937 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-743937 -n embed-certs-743937
E0311 21:28:51.608617   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:28:51.613861   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:28:51.624090   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:28:51.644323   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:28:51.644387   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:28:51.650718   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:28:51.660939   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:28:51.681150   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:28:51.685325   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:28:51.721527   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:28:51.765713   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:28:51.801909   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:28:51.926328   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-743937 -n embed-certs-743937: exit status 3 (18.451031193s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:29:06.537052   70197 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host
	E0311 21:29:06.537072   70197 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-743937" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-239315 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-239315 create -f testdata/busybox.yaml: exit status 1 (45.171143ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-239315" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-239315 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315: exit status 6 (223.051531ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:28:17.414822   69968 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-239315" does not appear in /home/jenkins/minikube-integration/18358-11004/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-239315" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315: exit status 6 (221.461587ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:28:17.637011   69998 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-239315" does not appear in /home/jenkins/minikube-integration/18358-11004/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-239315" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (99.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-239315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0311 21:28:18.106672   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-239315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m39.567517786s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-239315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-239315 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-239315 describe deploy/metrics-server -n kube-system: exit status 1 (42.866777ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-239315" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-239315 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315: exit status 6 (230.831961ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:29:57.477013   70780 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-239315" does not appear in /home/jenkins/minikube-integration/18358-11004/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-239315" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (99.84s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766430 -n default-k8s-diff-port-766430
E0311 21:28:51.962547   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:28:52.247040   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:28:52.283305   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:28:52.888161   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:28:52.924415   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766430 -n default-k8s-diff-port-766430: exit status 3 (3.167993344s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:28:55.113084   70227 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host
	E0311 21:28:55.113103   70227 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-766430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-766430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154098462s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-766430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766430 -n default-k8s-diff-port-766430
E0311 21:29:01.850362   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:29:01.885496   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:29:01.983731   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766430 -n default-k8s-diff-port-766430: exit status 3 (3.061595717s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:29:04.329176   70346 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host
	E0311 21:29:04.329196   70346 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.11:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-766430" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-324578 -n no-preload-324578
E0311 21:28:54.169036   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:28:54.205212   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-324578 -n no-preload-324578: exit status 3 (3.168068531s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:28:56.137124   70268 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	E0311 21:28:56.137145   70268 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-324578 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0311 21:28:56.729239   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:28:56.765338   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:28:59.067719   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-324578 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153933472s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-324578 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-324578 -n no-preload-324578
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-324578 -n no-preload-324578: exit status 3 (3.062034797s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:29:05.353197   70376 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	E0311 21:29:05.353218   70376 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-324578" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-743937 -n embed-certs-743937
E0311 21:29:07.761799   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-743937 -n embed-certs-743937: exit status 3 (3.167888703s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:29:09.705115   70492 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host
	E0311 21:29:09.705163   70492 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-743937 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0311 21:29:12.090662   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:29:12.125919   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:29:12.525507   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-743937 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154199299s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-743937 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-743937 -n embed-certs-743937
E0311 21:29:17.349316   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-743937 -n embed-certs-743937: exit status 3 (3.061674601s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0311 21:29:18.921111   70563 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host
	E0311 21:29:18.921139   70563 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-743937" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
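The three EnableAddonAfterStop failures above (default-k8s-diff-port, no-preload, embed-certs) share one symptom: every status and addons call dies while opening an SSH session to port 22 of the VM ("connect: no route to host"), so the post-stop host state is reported as "Error" instead of "Stopped". A minimal sketch of checking the guest from the Jenkins host (assuming the kvm2 driver names the libvirt domain after the profile, as the old-k8s-version-239315 log below shows; the IP is taken from the embed-certs stderr above):

	# confirm the libvirt domain state and its DHCP lease, then probe the guest IP
	virsh -c qemu:///system domstate embed-certs-743937
	virsh -c qemu:///system domifaddr embed-certs-743937
	ping -c 1 192.168.50.114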

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (776.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-239315 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0311 21:30:01.419113   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:30:11.660004   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:30:13.531394   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:30:13.566503   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:30:20.989422   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:30:32.141029   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:30:39.269899   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:31:13.101943   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:31:23.916275   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:31:28.681336   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:31:35.451621   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:31:35.486797   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:31:51.602051   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:31:56.365817   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:31:58.808066   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 21:32:35.022578   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:32:37.144734   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:32:38.935615   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 21:32:55.427270   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:33:04.830320   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:33:21.854641   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 21:33:23.111014   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:33:51.608712   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:33:51.643894   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:34:19.292874   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:34:19.326944   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:34:51.177830   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:35:18.862726   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:36:23.916064   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:36:28.681144   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:36:58.807782   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 21:37:37.145185   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:37:38.935615   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 21:37:55.427150   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
E0311 21:38:51.608725   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:38:51.643978   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-239315 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m52.765669851s)

                                                
                                                
-- stdout --
	* [old-k8s-version-239315] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-239315" primary control-plane node in "old-k8s-version-239315" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-239315" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 21:30:01.044166   70908 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:30:01.044254   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044259   70908 out.go:304] Setting ErrFile to fd 2...
	I0311 21:30:01.044263   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044451   70908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:30:01.044970   70908 out.go:298] Setting JSON to false
	I0311 21:30:01.045838   70908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7950,"bootTime":1710184651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:30:01.045894   70908 start.go:139] virtualization: kvm guest
	I0311 21:30:01.048311   70908 out.go:177] * [old-k8s-version-239315] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:30:01.050003   70908 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:30:01.050011   70908 notify.go:220] Checking for updates...
	I0311 21:30:01.051498   70908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:30:01.052999   70908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:30:01.054439   70908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:30:01.055768   70908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:30:01.057137   70908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:30:01.058760   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:30:01.059167   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.059205   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.073734   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0311 21:30:01.074087   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.074586   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.074618   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.074966   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.075173   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.077005   70908 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 21:30:01.078583   70908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:30:01.078879   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.078914   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.093226   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0311 21:30:01.093614   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.094174   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.094243   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.094616   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.094805   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.128302   70908 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 21:30:01.129965   70908 start.go:297] selected driver: kvm2
	I0311 21:30:01.129991   70908 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.130113   70908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:30:01.131050   70908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.131115   70908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:30:01.145452   70908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:30:01.145782   70908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:30:01.145811   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:30:01.145819   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:30:01.145863   70908 start.go:340] cluster config:
	{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.145954   70908 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.147725   70908 out.go:177] * Starting "old-k8s-version-239315" primary control-plane node in "old-k8s-version-239315" cluster
	I0311 21:30:01.148916   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:30:01.148943   70908 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0311 21:30:01.148955   70908 cache.go:56] Caching tarball of preloaded images
	I0311 21:30:01.149022   70908 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:30:01.149032   70908 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0311 21:30:01.149114   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:30:01.149263   70908 start.go:360] acquireMachinesLock for old-k8s-version-239315: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:34:23.543571   70908 start.go:364] duration metric: took 4m22.394278247s to acquireMachinesLock for "old-k8s-version-239315"
	I0311 21:34:23.543649   70908 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:23.543661   70908 fix.go:54] fixHost starting: 
	I0311 21:34:23.544084   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:23.544139   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:23.561669   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0311 21:34:23.562158   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:23.562618   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:34:23.562645   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:23.562949   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:23.563114   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:23.563306   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetState
	I0311 21:34:23.565152   70908 fix.go:112] recreateIfNeeded on old-k8s-version-239315: state=Stopped err=<nil>
	I0311 21:34:23.565178   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	W0311 21:34:23.565351   70908 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:23.567943   70908 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239315" ...
	I0311 21:34:23.569356   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .Start
	I0311 21:34:23.569527   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring networks are active...
	I0311 21:34:23.570188   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network default is active
	I0311 21:34:23.570613   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network mk-old-k8s-version-239315 is active
	I0311 21:34:23.571070   70908 main.go:141] libmachine: (old-k8s-version-239315) Getting domain xml...
	I0311 21:34:23.571836   70908 main.go:141] libmachine: (old-k8s-version-239315) Creating domain...
	I0311 21:34:24.895619   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting to get IP...
	I0311 21:34:24.896680   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:24.897160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:24.897218   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:24.897131   71714 retry.go:31] will retry after 268.563191ms: waiting for machine to come up
	I0311 21:34:25.167783   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.168312   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.168343   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.168268   71714 retry.go:31] will retry after 245.059124ms: waiting for machine to come up
	I0311 21:34:25.414644   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.415139   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.415168   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.415100   71714 retry.go:31] will retry after 407.807793ms: waiting for machine to come up
	I0311 21:34:25.824887   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.825351   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.825379   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.825274   71714 retry.go:31] will retry after 503.187834ms: waiting for machine to come up
	I0311 21:34:26.330005   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:26.330547   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:26.330569   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:26.330464   71714 retry.go:31] will retry after 723.914956ms: waiting for machine to come up
	I0311 21:34:27.056271   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.056879   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.056901   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.056834   71714 retry.go:31] will retry after 693.583075ms: waiting for machine to come up
	I0311 21:34:27.752514   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.752958   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.752980   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.752916   71714 retry.go:31] will retry after 902.247864ms: waiting for machine to come up
	I0311 21:34:28.657551   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:28.658023   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:28.658079   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:28.658008   71714 retry.go:31] will retry after 1.140425887s: waiting for machine to come up
	I0311 21:34:29.800305   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:29.800824   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:29.800852   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:29.800774   71714 retry.go:31] will retry after 1.68593342s: waiting for machine to come up
	I0311 21:34:31.488010   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:31.488449   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:31.488471   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:31.488421   71714 retry.go:31] will retry after 2.325869089s: waiting for machine to come up
	I0311 21:34:33.815568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:33.816215   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:33.816236   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:33.816176   71714 retry.go:31] will retry after 2.457084002s: waiting for machine to come up
	I0311 21:34:36.274640   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:36.275119   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:36.275157   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:36.275064   71714 retry.go:31] will retry after 3.618026102s: waiting for machine to come up
	I0311 21:34:39.894877   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:39.895397   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:39.895447   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:39.895343   71714 retry.go:31] will retry after 3.826847061s: waiting for machine to come up
	I0311 21:34:43.723851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724335   70908 main.go:141] libmachine: (old-k8s-version-239315) Found IP for machine: 192.168.72.52
	I0311 21:34:43.724367   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserving static IP address...
	I0311 21:34:43.724382   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has current primary IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724722   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.724759   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserved static IP address: 192.168.72.52
	I0311 21:34:43.724774   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | skip adding static IP to network mk-old-k8s-version-239315 - found existing host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"}
	I0311 21:34:43.724797   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Getting to WaitForSSH function...
	I0311 21:34:43.724815   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting for SSH to be available...
	I0311 21:34:43.727015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727330   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.727354   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727541   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH client type: external
	I0311 21:34:43.727568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa (-rw-------)
	I0311 21:34:43.727624   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:43.727641   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | About to run SSH command:
	I0311 21:34:43.727651   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | exit 0
	I0311 21:34:43.848884   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:43.849287   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetConfigRaw
	I0311 21:34:43.850084   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:43.852942   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853529   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.853572   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853801   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:34:43.854001   70908 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:43.854024   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:43.854255   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.856623   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857153   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.857187   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857321   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.857516   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857702   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857897   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.858105   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.858332   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.858349   70908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:43.961617   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:43.961664   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.961921   70908 buildroot.go:166] provisioning hostname "old-k8s-version-239315"
	I0311 21:34:43.961945   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.962134   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.964672   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.964987   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.965015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.965122   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.965305   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965466   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965591   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.965801   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.966042   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.966055   70908 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239315 && echo "old-k8s-version-239315" | sudo tee /etc/hostname
	I0311 21:34:44.088097   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239315
	
	I0311 21:34:44.088126   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.090911   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091167   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.091205   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091347   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.091524   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091680   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091818   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.091984   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.092185   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.092205   70908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239315' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239315/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239315' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:44.207643   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:44.207674   70908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:44.207693   70908 buildroot.go:174] setting up certificates
	I0311 21:34:44.207701   70908 provision.go:84] configureAuth start
	I0311 21:34:44.207710   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:44.207975   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:44.211160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211556   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.211588   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211754   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.214211   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214553   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.214585   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214732   70908 provision.go:143] copyHostCerts
	I0311 21:34:44.214797   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:44.214813   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:44.214886   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:44.214991   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:44.215005   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:44.215035   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:44.215160   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:44.215171   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:44.215198   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:44.215267   70908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239315 san=[127.0.0.1 192.168.72.52 localhost minikube old-k8s-version-239315]
	I0311 21:34:44.305250   70908 provision.go:177] copyRemoteCerts
	I0311 21:34:44.305329   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:44.305367   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.308244   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308636   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.308673   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308874   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.309092   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.309290   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.309446   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.394958   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:44.423314   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0311 21:34:44.459338   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:34:44.491201   70908 provision.go:87] duration metric: took 283.487383ms to configureAuth
	I0311 21:34:44.491232   70908 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:44.491419   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:34:44.491484   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.494039   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494476   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.494509   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494638   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.494830   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.494998   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.495175   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.495366   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.495548   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.495570   70908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:44.787935   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:44.787961   70908 machine.go:97] duration metric: took 933.945971ms to provisionDockerMachine
	I0311 21:34:44.787971   70908 start.go:293] postStartSetup for "old-k8s-version-239315" (driver="kvm2")
	I0311 21:34:44.787983   70908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:44.788007   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:44.788327   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:44.788355   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.791133   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791460   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.791492   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791637   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.791858   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.792021   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.792165   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.877163   70908 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:44.882141   70908 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:44.882164   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:44.882241   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:44.882330   70908 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:44.882442   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:44.894699   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:44.919809   70908 start.go:296] duration metric: took 131.8264ms for postStartSetup
	I0311 21:34:44.919848   70908 fix.go:56] duration metric: took 21.376188092s for fixHost
	I0311 21:34:44.919867   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.922414   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922708   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.922738   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922876   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.923075   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923274   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923455   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.923618   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.923806   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.923831   70908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 21:34:45.026068   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192885.004450463
	
	I0311 21:34:45.026088   70908 fix.go:216] guest clock: 1710192885.004450463
	I0311 21:34:45.026096   70908 fix.go:229] Guest: 2024-03-11 21:34:45.004450463 +0000 UTC Remote: 2024-03-11 21:34:44.919851167 +0000 UTC m=+283.922086595 (delta=84.599296ms)
	I0311 21:34:45.026118   70908 fix.go:200] guest clock delta is within tolerance: 84.599296ms
	I0311 21:34:45.026124   70908 start.go:83] releasing machines lock for "old-k8s-version-239315", held for 21.482500591s
	I0311 21:34:45.026158   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.026440   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:45.029366   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029778   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.029813   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029992   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030514   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030711   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030800   70908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:45.030846   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.030946   70908 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:45.030971   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.033851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.033989   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034264   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034292   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034324   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034348   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034429   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034618   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034633   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034799   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034814   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034979   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034977   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.035143   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.135748   70908 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:45.142408   70908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:45.297445   70908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:45.304482   70908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:45.304552   70908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:45.322754   70908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:45.322775   70908 start.go:494] detecting cgroup driver to use...
	I0311 21:34:45.322832   70908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:45.345988   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:45.363267   70908 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:45.363320   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:45.380892   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:45.396972   70908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:45.531640   70908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:45.700243   70908 docker.go:233] disabling docker service ...
	I0311 21:34:45.700306   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:45.730542   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:45.749068   70908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:45.903721   70908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:46.045122   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:46.065278   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:46.090726   70908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0311 21:34:46.090779   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.105783   70908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:46.105841   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.121702   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.136262   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.150628   70908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:46.163771   70908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:46.175613   70908 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:46.175675   70908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:46.193848   70908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:46.205694   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:46.344832   70908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:46.501773   70908 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:46.501851   70908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:46.507932   70908 start.go:562] Will wait 60s for crictl version
	I0311 21:34:46.507988   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:46.512337   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:46.555165   70908 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:46.555249   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.588554   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.623785   70908 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0311 21:34:46.625154   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:46.627732   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628080   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:46.628102   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628304   70908 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:46.633367   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:46.649537   70908 kubeadm.go:877] updating cluster {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:46.649677   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:34:46.649733   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:46.699194   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:46.699264   70908 ssh_runner.go:195] Run: which lz4
	I0311 21:34:46.703944   70908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:34:46.709224   70908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:34:46.709258   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0311 21:34:48.747926   70908 crio.go:444] duration metric: took 2.044006932s to copy over tarball
	I0311 21:34:48.747994   70908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:34:52.300295   70908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.55227284s)
	I0311 21:34:52.300322   70908 crio.go:451] duration metric: took 3.552370125s to extract the tarball
	I0311 21:34:52.300331   70908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:34:52.349405   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:52.395791   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:52.395821   70908 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:34:52.395892   70908 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.395955   70908 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.396002   70908 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.396010   70908 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.395959   70908 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.395932   70908 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.395921   70908 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0311 21:34:52.395974   70908 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397721   70908 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.397760   70908 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.397767   70908 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.397768   70908 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.397762   70908 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397804   70908 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.398008   70908 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0311 21:34:52.398129   70908 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.548255   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.549300   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.560293   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.564094   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.564433   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.569516   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.578251   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0311 21:34:52.674385   70908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0311 21:34:52.674427   70908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.674475   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.725602   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.741797   70908 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0311 21:34:52.741840   70908 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.741882   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.793195   70908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0311 21:34:52.793239   70908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.793278   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798118   70908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0311 21:34:52.798174   70908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.798220   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798241   70908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0311 21:34:52.798277   70908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.798312   70908 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0311 21:34:52.798333   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798285   70908 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0311 21:34:52.798378   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.798399   70908 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.798434   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798336   70908 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0311 21:34:52.798510   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.957658   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.957712   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.957765   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.957816   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.957846   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0311 21:34:52.957904   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0311 21:34:52.957925   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:53.106649   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0311 21:34:53.106699   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0311 21:34:53.106913   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0311 21:34:53.107837   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0311 21:34:53.116024   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0311 21:34:53.122060   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0311 21:34:53.122118   70908 cache_images.go:92] duration metric: took 726.282306ms to LoadCachedImages
	W0311 21:34:53.122205   70908 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0311 21:34:53.122224   70908 kubeadm.go:928] updating node { 192.168.72.52 8443 v1.20.0 crio true true} ...
	I0311 21:34:53.122341   70908 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239315 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:53.122443   70908 ssh_runner.go:195] Run: crio config
	I0311 21:34:53.192161   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:34:53.192191   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:53.192211   70908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:53.192233   70908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.52 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239315 NodeName:old-k8s-version-239315 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0311 21:34:53.192405   70908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239315"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:53.192476   70908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0311 21:34:53.203965   70908 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:53.204019   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:53.215221   70908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0311 21:34:53.235943   70908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:34:53.255383   70908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0311 21:34:53.276634   70908 ssh_runner.go:195] Run: grep 192.168.72.52	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:53.281778   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:53.298479   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:53.450052   70908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:53.472459   70908 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315 for IP: 192.168.72.52
	I0311 21:34:53.472480   70908 certs.go:194] generating shared ca certs ...
	I0311 21:34:53.472524   70908 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:53.472676   70908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:53.472728   70908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:53.472771   70908 certs.go:256] generating profile certs ...
	I0311 21:34:53.472883   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/client.key
	I0311 21:34:53.472954   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key.1e888bb1
	I0311 21:34:53.473013   70908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key
	I0311 21:34:53.473143   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:53.473185   70908 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:53.473198   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:53.473237   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:53.473272   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:53.473307   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:53.473363   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:53.473988   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:53.527429   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:53.575908   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:53.622438   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:53.665366   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 21:34:53.702121   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0311 21:34:53.746066   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:53.779151   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:53.813286   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:53.847058   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:53.882261   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:53.912444   70908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:53.932592   70908 ssh_runner.go:195] Run: openssl version
	I0311 21:34:53.939200   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:53.955630   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960866   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960920   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.967258   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:53.981075   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:53.995065   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000196   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000272   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.008574   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:54.022782   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:54.037409   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042893   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042965   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.049497   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:54.062597   70908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:54.067971   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:54.074746   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:54.081323   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:54.088762   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:54.095529   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:54.102396   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 21:34:54.109553   70908 kubeadm.go:391] StartCluster: {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:54.109639   70908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:54.109689   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.152063   70908 cri.go:89] found id: ""
	I0311 21:34:54.152143   70908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:54.163988   70908 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:54.164005   70908 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:54.164011   70908 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:54.164050   70908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:54.175616   70908 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:54.176779   70908 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239315" does not appear in /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:34:54.177542   70908 kubeconfig.go:62] /home/jenkins/minikube-integration/18358-11004/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239315" cluster setting kubeconfig missing "old-k8s-version-239315" context setting]
	I0311 21:34:54.178649   70908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:54.180405   70908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:54.191864   70908 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.52
	I0311 21:34:54.191891   70908 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:54.191903   70908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:54.191948   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.233779   70908 cri.go:89] found id: ""
	I0311 21:34:54.233852   70908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:54.253672   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:54.266010   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:54.266038   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:54.266085   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:54.277867   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:54.277918   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:54.288984   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:54.300133   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:54.300197   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:54.312090   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.323997   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:54.324059   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.337225   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:54.348223   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:54.348266   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
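Each of the four kubeconfig-style files above is handled the same way: grep for the expected control-plane endpoint, and if the file is missing or does not contain it, remove it so the later `kubeadm init phase kubeconfig` run can regenerate it. A small sketch of that pattern (endpoint and paths are taken from the log; the helper name is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureEndpoint removes conf files that do not reference the expected
    // control-plane endpoint, mirroring the grep/rm sequence in the log.
    // It is meant to run on the node itself (the log does this over SSH with sudo).
    func ensureEndpoint(endpoint string, files []string) {
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent or the file is missing.
            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s lacks %s, removing\n", f, endpoint)
                _ = exec.Command("rm", "-f", f).Run() // best effort, matches `rm -f`
            }
        }
    }

    func main() {
        ensureEndpoint("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }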
	I0311 21:34:54.359245   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:54.370003   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:54.525972   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.408437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.676995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.819933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.913736   70908 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:55.913811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:56.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:56.914753   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.413928   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.914123   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.413931   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.914199   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.414205   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.913880   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.914121   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:01.414003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:01.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.913977   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.414740   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.914735   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.414726   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.914846   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.914715   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:06.414389   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:06.914233   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.414565   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.914773   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.414348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.914743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.413987   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.914698   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:11.414320   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:11.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.414529   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.914476   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.414282   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.914426   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.414521   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.914001   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.414839   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.913921   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:16.414018   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:16.914685   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.414894   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.914319   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.414875   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.914338   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.414496   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.914396   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.414731   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.914149   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:21.414126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:21.914012   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.414680   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.414478   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.914770   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.414370   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.914772   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.413991   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.914516   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:26.414267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:26.914876   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.414469   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.914513   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.414924   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.914126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.414526   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.914039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.414305   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.914438   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.414610   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.914472   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.414158   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.914169   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.414745   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.914820   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.414071   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.914228   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.414135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.914695   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:36.414435   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:36.914157   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.414539   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.914811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.414070   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.914303   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.413935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.914135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.414569   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.914106   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:41.414404   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:41.914323   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.414215   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.914566   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.414671   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.914658   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.414703   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.913966   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.414045   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.914260   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:46.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:46.914821   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.414210   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.914008   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.413884   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.914160   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.414877   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.914379   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.414293   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.913867   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:51.414582   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:51.914453   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.414668   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.914816   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.414768   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.914592   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.414743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.914307   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:55.414000   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
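The block above is a fixed-interval poll: roughly every 500ms the same `pgrep -xnf kube-apiserver.*minikube.*` command is re-run until an apiserver process appears or the wait gives up. A minimal sketch of that wait loop (the interval is read off the log timestamps; the timeout value and function name are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for a running kube-apiserver process until the
    // deadline passes, mirroring the repeated pgrep calls in the log.
    func waitForAPIServer(interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(500*time.Millisecond, time.Minute); err != nil {
            fmt.Println(err)
        }
    }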
	I0311 21:35:55.914553   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:55.914636   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:55.957434   70908 cri.go:89] found id: ""
	I0311 21:35:55.957459   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.957470   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:55.957477   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:55.957545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:55.995255   70908 cri.go:89] found id: ""
	I0311 21:35:55.995279   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.995290   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:55.995305   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:55.995364   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:56.038893   70908 cri.go:89] found id: ""
	I0311 21:35:56.038916   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.038926   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:56.038933   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:56.038990   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:56.081497   70908 cri.go:89] found id: ""
	I0311 21:35:56.081517   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.081528   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:56.081534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:56.081591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:56.120047   70908 cri.go:89] found id: ""
	I0311 21:35:56.120071   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.120079   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:56.120084   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:56.120156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:56.157350   70908 cri.go:89] found id: ""
	I0311 21:35:56.157370   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.157377   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:56.157382   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:56.157433   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:56.198324   70908 cri.go:89] found id: ""
	I0311 21:35:56.198354   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.198374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:56.198381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:56.198437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:56.236579   70908 cri.go:89] found id: ""
	I0311 21:35:56.236608   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.236619   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:35:56.236691   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:56.236712   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:56.377789   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:35:56.377809   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:56.377825   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:56.449765   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:56.449807   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:56.502417   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:56.502448   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:56.557205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:56.557241   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
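When no control-plane containers are found, the run falls back to collecting diagnostics: the last 400 lines of the kubelet and crio journals, filtered dmesg output, `kubectl describe nodes`, and a `crictl ps -a` listing. A rough sketch of that read-only collection step (command strings are copied from the log; the helper name is illustrative, and real output would be captured into the report rather than printed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherDiagnostics runs the same read-only commands the log shows when the
    // apiserver never comes up, printing each command's combined output.
    func gatherDiagnostics() {
        cmds := map[string]string{
            "kubelet journal":  "sudo journalctl -u kubelet -n 400",
            "crio journal":     "sudo journalctl -u crio -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "container status": "sudo crictl ps -a",
        }
        for name, cmd := range cmds {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
        }
    }

    func main() { gatherDiagnostics() }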
	I0311 21:35:59.073411   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:59.088205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:59.088287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:59.126458   70908 cri.go:89] found id: ""
	I0311 21:35:59.126486   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.126494   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:59.126499   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:59.126555   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:59.197887   70908 cri.go:89] found id: ""
	I0311 21:35:59.197911   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.197919   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:59.197924   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:59.197967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:59.239523   70908 cri.go:89] found id: ""
	I0311 21:35:59.239552   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.239562   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:59.239570   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:59.239642   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:59.280903   70908 cri.go:89] found id: ""
	I0311 21:35:59.280930   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.280940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:59.280947   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:59.281024   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:59.320218   70908 cri.go:89] found id: ""
	I0311 21:35:59.320242   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.320254   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:59.320260   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:59.320314   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:59.361235   70908 cri.go:89] found id: ""
	I0311 21:35:59.361265   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.361276   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:59.361283   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:59.361352   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:59.409477   70908 cri.go:89] found id: ""
	I0311 21:35:59.409503   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.409514   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:59.409522   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:59.409568   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:59.454704   70908 cri.go:89] found id: ""
	I0311 21:35:59.454728   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.454739   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:35:59.454748   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:59.454767   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:59.525839   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:59.525864   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:59.569577   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:59.569606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:59.628402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:59.628437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:35:59.647181   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:59.647208   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:59.731300   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:02.232458   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:02.246948   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:02.247025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:02.290561   70908 cri.go:89] found id: ""
	I0311 21:36:02.290588   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.290599   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:02.290605   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:02.290659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:02.333788   70908 cri.go:89] found id: ""
	I0311 21:36:02.333814   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.333821   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:02.333826   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:02.333877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:02.375774   70908 cri.go:89] found id: ""
	I0311 21:36:02.375798   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.375806   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:02.375812   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:02.375862   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:02.414741   70908 cri.go:89] found id: ""
	I0311 21:36:02.414781   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.414803   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:02.414810   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:02.414875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:02.456637   70908 cri.go:89] found id: ""
	I0311 21:36:02.456660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.456670   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:02.456677   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:02.456759   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:02.494633   70908 cri.go:89] found id: ""
	I0311 21:36:02.494660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.494670   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:02.494678   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:02.494738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:02.536187   70908 cri.go:89] found id: ""
	I0311 21:36:02.536212   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.536223   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:02.536230   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:02.536291   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:02.574933   70908 cri.go:89] found id: ""
	I0311 21:36:02.574962   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.574973   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:02.574985   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:02.575001   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:02.656610   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:02.656637   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:02.656653   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:02.730514   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:02.730548   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:02.776009   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:02.776041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:02.829792   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:02.829826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.345568   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:05.360082   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:05.360164   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:05.406106   70908 cri.go:89] found id: ""
	I0311 21:36:05.406131   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.406141   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:05.406147   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:05.406203   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:05.449584   70908 cri.go:89] found id: ""
	I0311 21:36:05.449608   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.449617   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:05.449624   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:05.449680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:05.493869   70908 cri.go:89] found id: ""
	I0311 21:36:05.493898   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.493912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:05.493928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:05.493994   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:05.563506   70908 cri.go:89] found id: ""
	I0311 21:36:05.563532   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.563542   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:05.563549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:05.563600   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:05.630140   70908 cri.go:89] found id: ""
	I0311 21:36:05.630165   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.630172   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:05.630177   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:05.630230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:05.675584   70908 cri.go:89] found id: ""
	I0311 21:36:05.675612   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.675623   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:05.675631   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:05.675689   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:05.720521   70908 cri.go:89] found id: ""
	I0311 21:36:05.720548   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.720557   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:05.720563   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:05.720615   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:05.759323   70908 cri.go:89] found id: ""
	I0311 21:36:05.759351   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.759359   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:05.759367   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:05.759379   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:05.801024   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:05.801050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:05.856330   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:05.856356   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.871299   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:05.871324   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:05.950218   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:05.950245   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:05.950259   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:08.535502   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:08.552152   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:08.552220   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:08.596602   70908 cri.go:89] found id: ""
	I0311 21:36:08.596707   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.596731   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:08.596755   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:08.596820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:08.641091   70908 cri.go:89] found id: ""
	I0311 21:36:08.641119   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.641130   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:08.641137   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:08.641198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:08.684466   70908 cri.go:89] found id: ""
	I0311 21:36:08.684494   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.684503   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:08.684510   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:08.684570   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:08.730899   70908 cri.go:89] found id: ""
	I0311 21:36:08.730924   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.730931   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:08.730937   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:08.730997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:08.775293   70908 cri.go:89] found id: ""
	I0311 21:36:08.775317   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.775324   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:08.775330   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:08.775387   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:08.816098   70908 cri.go:89] found id: ""
	I0311 21:36:08.816126   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.816137   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:08.816144   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:08.816207   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:08.857413   70908 cri.go:89] found id: ""
	I0311 21:36:08.857449   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.857460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:08.857476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:08.857541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:08.898252   70908 cri.go:89] found id: ""
	I0311 21:36:08.898283   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.898293   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:08.898302   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:08.898313   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:08.955162   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:08.955188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:08.970234   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:08.970258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:09.055025   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:09.055043   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:09.055055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:09.140345   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:09.140376   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:11.681542   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:11.697407   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:11.697481   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:11.740239   70908 cri.go:89] found id: ""
	I0311 21:36:11.740264   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.740274   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:11.740280   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:11.740336   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:11.777625   70908 cri.go:89] found id: ""
	I0311 21:36:11.777655   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.777667   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:11.777674   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:11.777745   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:11.817202   70908 cri.go:89] found id: ""
	I0311 21:36:11.817226   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.817233   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:11.817239   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:11.817306   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:11.858912   70908 cri.go:89] found id: ""
	I0311 21:36:11.858933   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.858940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:11.858945   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:11.858998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:11.897841   70908 cri.go:89] found id: ""
	I0311 21:36:11.897876   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.897887   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:11.897895   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:11.897955   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:11.936181   70908 cri.go:89] found id: ""
	I0311 21:36:11.936207   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.936218   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:11.936226   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:11.936293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:11.981882   70908 cri.go:89] found id: ""
	I0311 21:36:11.981905   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.981915   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:11.981922   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:11.981982   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:12.022270   70908 cri.go:89] found id: ""
	I0311 21:36:12.022298   70908 logs.go:276] 0 containers: []
	W0311 21:36:12.022309   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:12.022320   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:12.022333   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:12.074640   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:12.074668   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:12.089854   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:12.089879   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:12.179578   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:12.179595   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:12.179606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:12.263249   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:12.263285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:14.811547   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:14.827075   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:14.827175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:14.870512   70908 cri.go:89] found id: ""
	I0311 21:36:14.870544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.870555   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:14.870563   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:14.870625   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:14.908521   70908 cri.go:89] found id: ""
	I0311 21:36:14.908544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.908553   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:14.908558   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:14.908607   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:14.951702   70908 cri.go:89] found id: ""
	I0311 21:36:14.951729   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.951739   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:14.951746   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:14.951805   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:14.992590   70908 cri.go:89] found id: ""
	I0311 21:36:14.992618   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.992630   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:14.992638   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:14.992698   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:15.034535   70908 cri.go:89] found id: ""
	I0311 21:36:15.034556   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.034563   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:15.034569   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:15.034614   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:15.077175   70908 cri.go:89] found id: ""
	I0311 21:36:15.077200   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.077210   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:15.077218   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:15.077283   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:15.121500   70908 cri.go:89] found id: ""
	I0311 21:36:15.121530   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.121541   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:15.121549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:15.121655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:15.162712   70908 cri.go:89] found id: ""
	I0311 21:36:15.162738   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.162748   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:15.162757   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:15.162776   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:15.241469   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:15.241488   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:15.241499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:15.322257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:15.322291   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:15.368258   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:15.368285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:15.427131   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:15.427163   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:17.944348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:17.958629   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:17.958704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:17.995869   70908 cri.go:89] found id: ""
	I0311 21:36:17.995895   70908 logs.go:276] 0 containers: []
	W0311 21:36:17.995904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:17.995914   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:17.995976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:18.032273   70908 cri.go:89] found id: ""
	I0311 21:36:18.032300   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.032308   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:18.032313   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:18.032361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:18.072497   70908 cri.go:89] found id: ""
	I0311 21:36:18.072519   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.072526   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:18.072532   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:18.072578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:18.110091   70908 cri.go:89] found id: ""
	I0311 21:36:18.110119   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.110129   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:18.110136   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:18.110199   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:18.152217   70908 cri.go:89] found id: ""
	I0311 21:36:18.152261   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.152272   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:18.152280   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:18.152347   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:18.193957   70908 cri.go:89] found id: ""
	I0311 21:36:18.193989   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.194000   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:18.194008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:18.194086   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:18.231828   70908 cri.go:89] found id: ""
	I0311 21:36:18.231861   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.231873   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:18.231880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:18.231939   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:18.271862   70908 cri.go:89] found id: ""
	I0311 21:36:18.271896   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.271907   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:18.271917   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:18.271933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:18.325405   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:18.325440   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:18.344560   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:18.344593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:18.425051   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:18.425075   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:18.425093   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:18.513247   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:18.513287   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:21.060499   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:21.076648   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:21.076716   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:21.117270   70908 cri.go:89] found id: ""
	I0311 21:36:21.117298   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.117309   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:21.117317   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:21.117388   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:21.159005   70908 cri.go:89] found id: ""
	I0311 21:36:21.159045   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.159056   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:21.159063   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:21.159122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:21.196576   70908 cri.go:89] found id: ""
	I0311 21:36:21.196599   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.196609   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:21.196617   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:21.196677   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:21.237689   70908 cri.go:89] found id: ""
	I0311 21:36:21.237718   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.237729   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:21.237734   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:21.237783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:21.280662   70908 cri.go:89] found id: ""
	I0311 21:36:21.280696   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.280707   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:21.280714   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:21.280795   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:21.321475   70908 cri.go:89] found id: ""
	I0311 21:36:21.321501   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.321511   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:21.321518   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:21.321581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:21.365186   70908 cri.go:89] found id: ""
	I0311 21:36:21.365209   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.365216   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:21.365221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:21.365276   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:21.408678   70908 cri.go:89] found id: ""
	I0311 21:36:21.408713   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.408725   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:21.408754   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:21.408771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:21.466635   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:21.466663   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:21.482596   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:21.482622   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:21.556750   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:21.556769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:21.556780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:21.643095   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:21.643126   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.195112   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:24.208829   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:24.208895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:24.245956   70908 cri.go:89] found id: ""
	I0311 21:36:24.245981   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.245989   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:24.245995   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:24.246053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:24.289740   70908 cri.go:89] found id: ""
	I0311 21:36:24.289766   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.289778   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:24.289784   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:24.289846   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:24.336911   70908 cri.go:89] found id: ""
	I0311 21:36:24.336963   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.336977   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:24.336986   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:24.337057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:24.381715   70908 cri.go:89] found id: ""
	I0311 21:36:24.381739   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.381753   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:24.381761   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:24.381817   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:24.423759   70908 cri.go:89] found id: ""
	I0311 21:36:24.423787   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.423797   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:24.423805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:24.423882   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:24.468903   70908 cri.go:89] found id: ""
	I0311 21:36:24.468931   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.468946   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:24.468954   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:24.469013   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:24.509602   70908 cri.go:89] found id: ""
	I0311 21:36:24.509629   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.509639   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:24.509646   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:24.509706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:24.551483   70908 cri.go:89] found id: ""
	I0311 21:36:24.551511   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.551522   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:24.551532   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:24.551545   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:24.567123   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:24.567154   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:24.644215   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:24.644247   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:24.644262   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:24.726438   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:24.726469   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.779567   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:24.779596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:27.337785   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:27.352504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:27.352578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:27.395787   70908 cri.go:89] found id: ""
	I0311 21:36:27.395809   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.395817   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:27.395823   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:27.395869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:27.441800   70908 cri.go:89] found id: ""
	I0311 21:36:27.441826   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.441834   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:27.441839   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:27.441893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:27.481761   70908 cri.go:89] found id: ""
	I0311 21:36:27.481791   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.481802   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:27.481809   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:27.481868   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:27.526981   70908 cri.go:89] found id: ""
	I0311 21:36:27.527011   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.527029   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:27.527037   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:27.527130   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:27.566569   70908 cri.go:89] found id: ""
	I0311 21:36:27.566602   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.566614   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:27.566622   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:27.566682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:27.607434   70908 cri.go:89] found id: ""
	I0311 21:36:27.607456   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.607464   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:27.607469   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:27.607529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:27.652648   70908 cri.go:89] found id: ""
	I0311 21:36:27.652674   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.652681   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:27.652686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:27.652756   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:27.691105   70908 cri.go:89] found id: ""
	I0311 21:36:27.691136   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.691148   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:27.691158   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:27.691173   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:27.706451   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:27.706477   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:27.788935   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:27.788959   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:27.788975   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:27.875721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:27.875758   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:27.927920   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:27.927951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.487728   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:30.503425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:30.503508   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:30.550846   70908 cri.go:89] found id: ""
	I0311 21:36:30.550868   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.550875   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:30.550881   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:30.550928   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:30.586886   70908 cri.go:89] found id: ""
	I0311 21:36:30.586915   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.586925   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:30.586934   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:30.586991   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:30.627849   70908 cri.go:89] found id: ""
	I0311 21:36:30.627884   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.627895   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:30.627902   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:30.627965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:30.669188   70908 cri.go:89] found id: ""
	I0311 21:36:30.669209   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.669216   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:30.669222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:30.669266   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:30.711676   70908 cri.go:89] found id: ""
	I0311 21:36:30.711697   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.711705   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:30.711710   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:30.711758   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:30.754218   70908 cri.go:89] found id: ""
	I0311 21:36:30.754240   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.754248   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:30.754253   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:30.754299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:30.791224   70908 cri.go:89] found id: ""
	I0311 21:36:30.791255   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.791263   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:30.791269   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:30.791328   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:30.831263   70908 cri.go:89] found id: ""
	I0311 21:36:30.831291   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.831301   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:30.831311   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:30.831326   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:30.876574   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:30.876600   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.928483   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:30.928509   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:30.944642   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:30.944665   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:31.026406   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:31.026428   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:31.026444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:33.611104   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:33.625644   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:33.625706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:33.664787   70908 cri.go:89] found id: ""
	I0311 21:36:33.664816   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.664825   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:33.664830   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:33.664894   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:33.704636   70908 cri.go:89] found id: ""
	I0311 21:36:33.704659   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.704666   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:33.704672   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:33.704717   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:33.744797   70908 cri.go:89] found id: ""
	I0311 21:36:33.744837   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.744848   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:33.744855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:33.744917   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:33.787435   70908 cri.go:89] found id: ""
	I0311 21:36:33.787464   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.787474   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:33.787482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:33.787541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:33.826578   70908 cri.go:89] found id: ""
	I0311 21:36:33.826606   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.826617   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:33.826624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:33.826684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:33.864854   70908 cri.go:89] found id: ""
	I0311 21:36:33.864875   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.864882   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:33.864887   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:33.864934   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:33.905366   70908 cri.go:89] found id: ""
	I0311 21:36:33.905397   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.905409   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:33.905416   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:33.905477   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:33.950196   70908 cri.go:89] found id: ""
	I0311 21:36:33.950222   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.950232   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:33.950243   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:33.950258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:34.001016   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:34.001049   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:34.059102   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:34.059131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:34.075879   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:34.075908   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:34.177114   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:34.177138   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:34.177161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:36.756459   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:36.772781   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:36.772867   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:36.820076   70908 cri.go:89] found id: ""
	I0311 21:36:36.820103   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.820111   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:36.820118   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:36.820169   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:36.859279   70908 cri.go:89] found id: ""
	I0311 21:36:36.859306   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.859317   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:36.859324   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:36.859383   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:36.899669   70908 cri.go:89] found id: ""
	I0311 21:36:36.899694   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.899705   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:36.899712   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:36.899770   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:36.938826   70908 cri.go:89] found id: ""
	I0311 21:36:36.938853   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.938864   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:36.938872   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:36.938957   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:36.976659   70908 cri.go:89] found id: ""
	I0311 21:36:36.976685   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.976693   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:36.976703   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:36.976772   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:37.015439   70908 cri.go:89] found id: ""
	I0311 21:36:37.015462   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.015469   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:37.015474   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:37.015519   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:37.057469   70908 cri.go:89] found id: ""
	I0311 21:36:37.057496   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.057507   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:37.057514   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:37.057579   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:37.106287   70908 cri.go:89] found id: ""
	I0311 21:36:37.106316   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.106325   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:37.106335   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:37.106352   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:37.122333   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:37.122367   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:37.197708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:37.197731   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:37.197742   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:37.281911   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:37.281944   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:37.335978   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:37.336011   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:39.891583   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:39.914741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:39.914823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:39.955751   70908 cri.go:89] found id: ""
	I0311 21:36:39.955773   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.955781   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:39.955786   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:39.955837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:39.997604   70908 cri.go:89] found id: ""
	I0311 21:36:39.997632   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.997642   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:39.997649   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:39.997711   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:40.039138   70908 cri.go:89] found id: ""
	I0311 21:36:40.039168   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.039178   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:40.039186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:40.039230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:40.079906   70908 cri.go:89] found id: ""
	I0311 21:36:40.079934   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.079945   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:40.079952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:40.080017   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:40.124116   70908 cri.go:89] found id: ""
	I0311 21:36:40.124141   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.124152   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:40.124159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:40.124221   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:40.165078   70908 cri.go:89] found id: ""
	I0311 21:36:40.165099   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.165108   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:40.165113   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:40.165158   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:40.203928   70908 cri.go:89] found id: ""
	I0311 21:36:40.203954   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.203962   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:40.203971   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:40.204018   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:40.244755   70908 cri.go:89] found id: ""
	I0311 21:36:40.244783   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.244793   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:40.244803   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:40.244819   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:40.302090   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:40.302125   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:40.318071   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:40.318097   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:40.405336   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:40.405363   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:40.405378   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:40.493262   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:40.493298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:43.052419   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:43.068300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:43.068378   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:43.109665   70908 cri.go:89] found id: ""
	I0311 21:36:43.109701   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.109717   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:43.109725   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:43.109789   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:43.152233   70908 cri.go:89] found id: ""
	I0311 21:36:43.152253   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.152260   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:43.152265   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:43.152311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:43.194969   70908 cri.go:89] found id: ""
	I0311 21:36:43.194995   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.195002   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:43.195008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:43.195056   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:43.234555   70908 cri.go:89] found id: ""
	I0311 21:36:43.234581   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.234592   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:43.234597   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:43.234651   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:43.275188   70908 cri.go:89] found id: ""
	I0311 21:36:43.275214   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.275224   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:43.275232   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:43.275287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:43.314481   70908 cri.go:89] found id: ""
	I0311 21:36:43.314507   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.314515   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:43.314521   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:43.314580   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:43.353287   70908 cri.go:89] found id: ""
	I0311 21:36:43.353317   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.353328   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:43.353336   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:43.353395   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:43.396112   70908 cri.go:89] found id: ""
	I0311 21:36:43.396138   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.396150   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:43.396160   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:43.396175   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:43.456116   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:43.456143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:43.472992   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:43.473023   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:43.558281   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:43.558311   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:43.558327   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:43.641849   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:43.641885   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:46.187444   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:46.202848   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:46.202911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:46.244843   70908 cri.go:89] found id: ""
	I0311 21:36:46.244872   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.244880   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:46.244886   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:46.244933   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:46.297789   70908 cri.go:89] found id: ""
	I0311 21:36:46.297820   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.297831   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:46.297838   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:46.297903   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:46.353104   70908 cri.go:89] found id: ""
	I0311 21:36:46.353127   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.353134   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:46.353140   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:46.353211   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:46.426767   70908 cri.go:89] found id: ""
	I0311 21:36:46.426792   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.426799   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:46.426804   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:46.426858   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:46.469850   70908 cri.go:89] found id: ""
	I0311 21:36:46.469881   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.469891   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:46.469899   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:46.469960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:46.510692   70908 cri.go:89] found id: ""
	I0311 21:36:46.510718   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.510726   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:46.510732   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:46.510787   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:46.554445   70908 cri.go:89] found id: ""
	I0311 21:36:46.554468   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.554475   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:46.554482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:46.554527   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:46.592417   70908 cri.go:89] found id: ""
	I0311 21:36:46.592448   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.592458   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:46.592467   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:46.592480   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:46.607106   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:46.607146   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:46.691556   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:46.691575   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:46.691587   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:46.772468   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:46.772503   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:46.814478   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:46.814512   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.368451   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:49.383504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:49.383573   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:49.427392   70908 cri.go:89] found id: ""
	I0311 21:36:49.427415   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.427426   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:49.427434   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:49.427493   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:49.469022   70908 cri.go:89] found id: ""
	I0311 21:36:49.469044   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.469052   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:49.469059   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:49.469106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:49.510755   70908 cri.go:89] found id: ""
	I0311 21:36:49.510781   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.510792   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:49.510800   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:49.510886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:49.556594   70908 cri.go:89] found id: ""
	I0311 21:36:49.556631   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.556642   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:49.556649   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:49.556710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:49.597035   70908 cri.go:89] found id: ""
	I0311 21:36:49.597059   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.597067   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:49.597072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:49.597138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:49.642947   70908 cri.go:89] found id: ""
	I0311 21:36:49.642975   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.642985   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:49.642993   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:49.643051   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:49.681401   70908 cri.go:89] found id: ""
	I0311 21:36:49.681423   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.681430   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:49.681435   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:49.681478   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:49.718498   70908 cri.go:89] found id: ""
	I0311 21:36:49.718529   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.718539   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:49.718549   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:49.718563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:49.764483   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:49.764515   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.821261   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:49.821293   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:49.837110   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:49.837135   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:49.918507   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:49.918529   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:49.918541   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:52.500354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:52.516722   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:52.516811   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:52.563312   70908 cri.go:89] found id: ""
	I0311 21:36:52.563340   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.563354   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:52.563362   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:52.563421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:52.603545   70908 cri.go:89] found id: ""
	I0311 21:36:52.603572   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.603581   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:52.603588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:52.603657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:52.645624   70908 cri.go:89] found id: ""
	I0311 21:36:52.645648   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.645658   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:52.645665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:52.645722   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:52.693335   70908 cri.go:89] found id: ""
	I0311 21:36:52.693363   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.693373   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:52.693380   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:52.693437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:52.740272   70908 cri.go:89] found id: ""
	I0311 21:36:52.740310   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.740331   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:52.740341   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:52.740398   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:52.786241   70908 cri.go:89] found id: ""
	I0311 21:36:52.786276   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.786285   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:52.786291   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:52.786355   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:52.825013   70908 cri.go:89] found id: ""
	I0311 21:36:52.825042   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.825053   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:52.825061   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:52.825117   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:52.862867   70908 cri.go:89] found id: ""
	I0311 21:36:52.862892   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.862901   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:52.862908   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:52.862922   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:52.917005   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:52.917036   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:52.932086   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:52.932112   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:53.012379   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:53.012402   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:53.012413   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:53.096881   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:53.096913   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
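
The "describe nodes" step that fails on every pass uses the binary and kubeconfig paths shown in the log; a minimal sketch of running it manually inside the node (assumes those same paths are present):

    # exits with status 1 while no kube-apiserver is serving on localhost:8443
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # expected output while the apiserver is down:
    #   The connection to the server localhost:8443 was refused - did you specify the right host or port?
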
	I0311 21:36:55.640142   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:55.656664   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:55.656749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:55.697962   70908 cri.go:89] found id: ""
	I0311 21:36:55.697992   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.698000   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:55.698005   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:55.698059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:55.741888   70908 cri.go:89] found id: ""
	I0311 21:36:55.741910   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.741917   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:55.741921   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:55.741965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:55.779352   70908 cri.go:89] found id: ""
	I0311 21:36:55.779372   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.779381   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:55.779386   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:55.779430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:55.819496   70908 cri.go:89] found id: ""
	I0311 21:36:55.819530   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.819541   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:55.819549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:55.819612   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:55.859384   70908 cri.go:89] found id: ""
	I0311 21:36:55.859412   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.859419   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:55.859424   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:55.859473   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:55.899415   70908 cri.go:89] found id: ""
	I0311 21:36:55.899438   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.899445   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:55.899450   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:55.899496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:55.938595   70908 cri.go:89] found id: ""
	I0311 21:36:55.938625   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.938637   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:55.938645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:55.938710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:55.980064   70908 cri.go:89] found id: ""
	I0311 21:36:55.980089   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.980096   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:55.980103   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:55.980115   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:55.996222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:55.996297   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:56.081046   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:56.081074   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:56.081090   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:56.167748   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:56.167773   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:56.221118   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:56.221150   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:58.772403   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:58.789349   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:58.789421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:58.829945   70908 cri.go:89] found id: ""
	I0311 21:36:58.829974   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.829985   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:58.829993   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:58.830059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:58.877190   70908 cri.go:89] found id: ""
	I0311 21:36:58.877214   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.877224   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:58.877231   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:58.877295   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:58.920086   70908 cri.go:89] found id: ""
	I0311 21:36:58.920113   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.920122   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:58.920128   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:58.920189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:58.956864   70908 cri.go:89] found id: ""
	I0311 21:36:58.956890   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.956900   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:58.956907   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:58.956967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:58.999363   70908 cri.go:89] found id: ""
	I0311 21:36:58.999390   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.999400   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:58.999408   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:58.999469   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:59.041759   70908 cri.go:89] found id: ""
	I0311 21:36:59.041787   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.041797   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:59.041803   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:59.041850   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:59.084378   70908 cri.go:89] found id: ""
	I0311 21:36:59.084406   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.084417   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:59.084425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:59.084479   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:59.124105   70908 cri.go:89] found id: ""
	I0311 21:36:59.124151   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.124163   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:59.124173   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:59.124188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:59.202060   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:59.202083   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:59.202098   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:59.284025   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:59.284060   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:59.327926   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:59.327951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:59.382505   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:59.382533   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
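
Each pass also gathers the same diagnostic bundle. Collected into one manual sweep, using exactly the commands the log shows (a sketch, assuming shell access to the node):

    sudo journalctl -u kubelet -n 400                                        # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings/errors
    sudo journalctl -u crio -n 400                                           # CRI-O logs
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # container status
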
	I0311 21:37:01.900084   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:01.914495   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:01.914552   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:01.956887   70908 cri.go:89] found id: ""
	I0311 21:37:01.956912   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.956922   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:01.956929   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:01.956986   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:01.995358   70908 cri.go:89] found id: ""
	I0311 21:37:01.995385   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.995394   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:01.995399   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:01.995448   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:02.033949   70908 cri.go:89] found id: ""
	I0311 21:37:02.033974   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.033984   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:02.033991   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:02.034049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:02.074348   70908 cri.go:89] found id: ""
	I0311 21:37:02.074372   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.074382   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:02.074390   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:02.074449   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:02.112456   70908 cri.go:89] found id: ""
	I0311 21:37:02.112479   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.112486   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:02.112491   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:02.112554   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:02.155102   70908 cri.go:89] found id: ""
	I0311 21:37:02.155130   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.155138   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:02.155149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:02.155205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:02.191359   70908 cri.go:89] found id: ""
	I0311 21:37:02.191386   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.191393   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:02.191399   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:02.191450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:02.236178   70908 cri.go:89] found id: ""
	I0311 21:37:02.236203   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.236211   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:02.236220   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:02.236231   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:02.285794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:02.285818   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:02.342348   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:02.342387   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:02.357230   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:02.357257   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:02.431044   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:02.431064   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:02.431076   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.019473   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:05.035841   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:05.035901   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:05.082013   70908 cri.go:89] found id: ""
	I0311 21:37:05.082034   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.082041   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:05.082046   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:05.082091   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:05.126236   70908 cri.go:89] found id: ""
	I0311 21:37:05.126257   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.126265   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:05.126270   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:05.126311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:05.170573   70908 cri.go:89] found id: ""
	I0311 21:37:05.170601   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.170608   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:05.170614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:05.170658   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:05.213921   70908 cri.go:89] found id: ""
	I0311 21:37:05.213948   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.213958   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:05.213965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:05.214025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:05.261178   70908 cri.go:89] found id: ""
	I0311 21:37:05.261206   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.261213   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:05.261221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:05.261273   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:05.306007   70908 cri.go:89] found id: ""
	I0311 21:37:05.306037   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.306045   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:05.306051   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:05.306106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:05.346653   70908 cri.go:89] found id: ""
	I0311 21:37:05.346679   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.346688   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:05.346694   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:05.346752   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:05.384587   70908 cri.go:89] found id: ""
	I0311 21:37:05.384626   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.384637   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:05.384648   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:05.384664   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:05.440676   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:05.440709   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:05.456989   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:05.457018   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:05.553900   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:05.553932   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:05.553947   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.633270   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:05.633300   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:08.181935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:08.198179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:08.198251   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:08.236484   70908 cri.go:89] found id: ""
	I0311 21:37:08.236506   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.236516   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:08.236524   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:08.236578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:08.277701   70908 cri.go:89] found id: ""
	I0311 21:37:08.277731   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.277739   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:08.277745   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:08.277804   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:08.319559   70908 cri.go:89] found id: ""
	I0311 21:37:08.319585   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.319596   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:08.319604   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:08.319666   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:08.359752   70908 cri.go:89] found id: ""
	I0311 21:37:08.359777   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.359785   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:08.359791   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:08.359849   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:08.397432   70908 cri.go:89] found id: ""
	I0311 21:37:08.397453   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.397460   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:08.397465   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:08.397511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:08.438708   70908 cri.go:89] found id: ""
	I0311 21:37:08.438732   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.438742   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:08.438749   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:08.438807   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:08.479511   70908 cri.go:89] found id: ""
	I0311 21:37:08.479533   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.479560   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:08.479566   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:08.479620   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:08.521634   70908 cri.go:89] found id: ""
	I0311 21:37:08.521659   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.521670   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:08.521680   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:08.521693   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:08.577033   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:08.577065   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:08.592006   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:08.592030   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:08.680862   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:08.680903   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:08.680919   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:08.764991   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:08.765037   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:11.313168   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:11.326808   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:11.326876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:11.364223   70908 cri.go:89] found id: ""
	I0311 21:37:11.364246   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.364254   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:11.364259   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:11.364311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:11.401361   70908 cri.go:89] found id: ""
	I0311 21:37:11.401391   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.401402   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:11.401409   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:11.401459   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:11.441927   70908 cri.go:89] found id: ""
	I0311 21:37:11.441950   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.441957   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:11.441962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:11.442015   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:11.480804   70908 cri.go:89] found id: ""
	I0311 21:37:11.480836   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.480847   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:11.480855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:11.480913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:11.520135   70908 cri.go:89] found id: ""
	I0311 21:37:11.520166   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.520177   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:11.520193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:11.520255   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:11.559214   70908 cri.go:89] found id: ""
	I0311 21:37:11.559244   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.559255   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:11.559263   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:11.559322   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:11.597346   70908 cri.go:89] found id: ""
	I0311 21:37:11.597374   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.597383   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:11.597391   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:11.597452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:11.646095   70908 cri.go:89] found id: ""
	I0311 21:37:11.646118   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.646127   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:11.646137   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:11.646167   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:11.691813   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:11.691844   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:11.745270   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:11.745303   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:11.761107   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:11.761131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:11.841033   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:11.841059   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:11.841074   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:14.431709   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:14.447064   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:14.447131   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:14.493094   70908 cri.go:89] found id: ""
	I0311 21:37:14.493132   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.493140   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:14.493146   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:14.493195   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:14.537391   70908 cri.go:89] found id: ""
	I0311 21:37:14.537415   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.537423   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:14.537428   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:14.537487   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:14.576284   70908 cri.go:89] found id: ""
	I0311 21:37:14.576306   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.576313   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:14.576319   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:14.576375   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:14.627057   70908 cri.go:89] found id: ""
	I0311 21:37:14.627086   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.627097   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:14.627105   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:14.627163   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:14.669204   70908 cri.go:89] found id: ""
	I0311 21:37:14.669226   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.669233   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:14.669238   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:14.669293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:14.708787   70908 cri.go:89] found id: ""
	I0311 21:37:14.708812   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.708820   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:14.708826   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:14.708892   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:14.749795   70908 cri.go:89] found id: ""
	I0311 21:37:14.749819   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.749828   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:14.749835   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:14.749893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:14.794871   70908 cri.go:89] found id: ""
	I0311 21:37:14.794900   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.794911   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:14.794922   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:14.794936   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:14.850022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:14.850050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:14.866589   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:14.866618   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:14.968887   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:14.968906   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:14.968921   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:15.047376   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:15.047404   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:17.599834   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:17.613610   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:17.613665   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:17.655340   70908 cri.go:89] found id: ""
	I0311 21:37:17.655361   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.655369   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:17.655374   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:17.655416   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:17.695071   70908 cri.go:89] found id: ""
	I0311 21:37:17.695103   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.695114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:17.695121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:17.695178   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:17.731914   70908 cri.go:89] found id: ""
	I0311 21:37:17.731938   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.731946   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:17.731952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:17.732012   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:17.768198   70908 cri.go:89] found id: ""
	I0311 21:37:17.768224   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.768236   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:17.768242   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:17.768301   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:17.802881   70908 cri.go:89] found id: ""
	I0311 21:37:17.802909   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.802920   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:17.802928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:17.802983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:17.841660   70908 cri.go:89] found id: ""
	I0311 21:37:17.841684   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.841692   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:17.841698   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:17.841749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:17.880154   70908 cri.go:89] found id: ""
	I0311 21:37:17.880183   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.880196   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:17.880205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:17.880260   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:17.919797   70908 cri.go:89] found id: ""
	I0311 21:37:17.919822   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.919829   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:17.919837   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:17.919847   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:17.976607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:17.976636   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:17.993313   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:17.993339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:18.069928   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:18.069956   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:18.069973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:18.152257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:18.152285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:20.706553   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:20.721148   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:20.721214   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:20.762913   70908 cri.go:89] found id: ""
	I0311 21:37:20.762935   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.762943   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:20.762952   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:20.762997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:20.811120   70908 cri.go:89] found id: ""
	I0311 21:37:20.811147   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.811158   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:20.811165   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:20.811225   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:20.848987   70908 cri.go:89] found id: ""
	I0311 21:37:20.849015   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.849026   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:20.849033   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:20.849098   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:20.896201   70908 cri.go:89] found id: ""
	I0311 21:37:20.896226   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.896233   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:20.896240   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:20.896299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:20.936570   70908 cri.go:89] found id: ""
	I0311 21:37:20.936595   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.936603   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:20.936608   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:20.936657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:20.977535   70908 cri.go:89] found id: ""
	I0311 21:37:20.977565   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.977576   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:20.977584   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:20.977647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:21.015370   70908 cri.go:89] found id: ""
	I0311 21:37:21.015395   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.015405   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:21.015413   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:21.015472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:21.056190   70908 cri.go:89] found id: ""
	I0311 21:37:21.056214   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.056225   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:21.056235   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:21.056255   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:21.112022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:21.112051   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:21.128841   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:21.128872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:21.209690   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:21.209716   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:21.209732   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:21.291064   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:21.291099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:23.844334   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:23.860000   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:23.860061   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:23.899777   70908 cri.go:89] found id: ""
	I0311 21:37:23.899805   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.899814   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:23.899820   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:23.899879   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:23.941510   70908 cri.go:89] found id: ""
	I0311 21:37:23.941537   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.941547   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:23.941555   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:23.941627   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:23.980564   70908 cri.go:89] found id: ""
	I0311 21:37:23.980592   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.980602   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:23.980614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:23.980676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:24.020310   70908 cri.go:89] found id: ""
	I0311 21:37:24.020337   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.020348   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:24.020354   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:24.020410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:24.059320   70908 cri.go:89] found id: ""
	I0311 21:37:24.059349   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.059359   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:24.059367   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:24.059424   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:24.096625   70908 cri.go:89] found id: ""
	I0311 21:37:24.096652   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.096660   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:24.096666   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:24.096723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:24.137068   70908 cri.go:89] found id: ""
	I0311 21:37:24.137100   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.137112   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:24.137121   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:24.137182   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:24.181298   70908 cri.go:89] found id: ""
	I0311 21:37:24.181325   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.181336   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:24.181348   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:24.181364   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:24.265423   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:24.265454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:24.318088   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:24.318113   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:24.374402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:24.374430   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:24.388934   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:24.388962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:24.475842   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:26.976017   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:26.991533   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:26.991602   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:27.034750   70908 cri.go:89] found id: ""
	I0311 21:37:27.034769   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.034776   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:27.034781   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:27.034837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:27.073275   70908 cri.go:89] found id: ""
	I0311 21:37:27.073301   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.073309   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:27.073317   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:27.073363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:27.113396   70908 cri.go:89] found id: ""
	I0311 21:37:27.113418   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.113425   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:27.113431   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:27.113482   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:27.157442   70908 cri.go:89] found id: ""
	I0311 21:37:27.157465   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.157475   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:27.157482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:27.157534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:27.197277   70908 cri.go:89] found id: ""
	I0311 21:37:27.197302   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.197309   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:27.197315   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:27.197363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:27.237967   70908 cri.go:89] found id: ""
	I0311 21:37:27.237991   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.237999   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:27.238005   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:27.238077   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:27.280434   70908 cri.go:89] found id: ""
	I0311 21:37:27.280459   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.280467   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:27.280472   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:27.280535   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:27.334940   70908 cri.go:89] found id: ""
	I0311 21:37:27.334970   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.334982   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:27.334992   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:27.335010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:27.402535   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:27.402570   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:27.416758   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:27.416787   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:27.492762   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:27.492786   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:27.492803   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:27.576989   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:27.577032   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:30.124039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:30.138419   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:30.138483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:30.180900   70908 cri.go:89] found id: ""
	I0311 21:37:30.180926   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.180936   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:30.180944   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:30.180998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:30.222886   70908 cri.go:89] found id: ""
	I0311 21:37:30.222913   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.222921   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:30.222926   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:30.222976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:30.264332   70908 cri.go:89] found id: ""
	I0311 21:37:30.264357   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.264367   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:30.264376   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:30.264436   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:30.307084   70908 cri.go:89] found id: ""
	I0311 21:37:30.307112   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.307123   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:30.307130   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:30.307188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:30.345954   70908 cri.go:89] found id: ""
	I0311 21:37:30.345979   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.345990   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:30.345997   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:30.346057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:30.389408   70908 cri.go:89] found id: ""
	I0311 21:37:30.389439   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.389450   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:30.389457   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:30.389517   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:30.438380   70908 cri.go:89] found id: ""
	I0311 21:37:30.438410   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.438420   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:30.438427   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:30.438489   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:30.479860   70908 cri.go:89] found id: ""
	I0311 21:37:30.479884   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.479895   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:30.479906   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:30.479920   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:30.535831   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:30.535857   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:30.552702   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:30.552725   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:30.633417   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:30.633439   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:30.633454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:30.723106   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:30.723143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:33.270654   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:33.296640   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:33.296710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:33.366053   70908 cri.go:89] found id: ""
	I0311 21:37:33.366082   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.366093   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:33.366101   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:33.366161   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:33.421455   70908 cri.go:89] found id: ""
	I0311 21:37:33.421488   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.421501   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:33.421509   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:33.421583   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:33.464555   70908 cri.go:89] found id: ""
	I0311 21:37:33.464579   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.464586   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:33.464592   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:33.464647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:33.507044   70908 cri.go:89] found id: ""
	I0311 21:37:33.507086   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.507100   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:33.507110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:33.507175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:33.561446   70908 cri.go:89] found id: ""
	I0311 21:37:33.561518   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.561532   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:33.561540   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:33.561601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:33.604496   70908 cri.go:89] found id: ""
	I0311 21:37:33.604519   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.604528   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:33.604534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:33.604591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:33.645754   70908 cri.go:89] found id: ""
	I0311 21:37:33.645781   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.645791   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:33.645797   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:33.645869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:33.690041   70908 cri.go:89] found id: ""
	I0311 21:37:33.690071   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.690082   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:33.690092   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:33.690108   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:33.765708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:33.765737   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:33.765752   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:33.848869   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:33.848906   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:33.900191   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:33.900223   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:33.957101   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:33.957138   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:36.474442   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:36.490159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:36.490231   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:36.537784   70908 cri.go:89] found id: ""
	I0311 21:37:36.537812   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.537822   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:36.537829   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:36.537885   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:36.581192   70908 cri.go:89] found id: ""
	I0311 21:37:36.581219   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.581230   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:36.581237   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:36.581297   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:36.620448   70908 cri.go:89] found id: ""
	I0311 21:37:36.620480   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.620492   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:36.620501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:36.620566   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:36.662135   70908 cri.go:89] found id: ""
	I0311 21:37:36.662182   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.662193   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:36.662203   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:36.662268   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:36.708138   70908 cri.go:89] found id: ""
	I0311 21:37:36.708178   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.708188   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:36.708198   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:36.708267   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:36.749668   70908 cri.go:89] found id: ""
	I0311 21:37:36.749697   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.749708   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:36.749717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:36.749783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:36.788455   70908 cri.go:89] found id: ""
	I0311 21:37:36.788476   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.788483   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:36.788488   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:36.788534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:36.830216   70908 cri.go:89] found id: ""
	I0311 21:37:36.830244   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.830257   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:36.830267   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:36.830285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:36.915306   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:36.915336   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:36.958861   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:36.958892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:37.014463   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:37.014489   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:37.029979   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:37.030010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:37.106840   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:39.607929   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:39.626247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:39.626307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:39.667409   70908 cri.go:89] found id: ""
	I0311 21:37:39.667436   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.667446   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:39.667454   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:39.667509   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:39.714167   70908 cri.go:89] found id: ""
	I0311 21:37:39.714198   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.714210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:39.714217   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:39.714275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:39.754759   70908 cri.go:89] found id: ""
	I0311 21:37:39.754787   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.754798   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:39.754805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:39.754865   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:39.794999   70908 cri.go:89] found id: ""
	I0311 21:37:39.795028   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.795038   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:39.795045   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:39.795108   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:39.836284   70908 cri.go:89] found id: ""
	I0311 21:37:39.836310   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.836321   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:39.836328   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:39.836386   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:39.876487   70908 cri.go:89] found id: ""
	I0311 21:37:39.876518   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.876530   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:39.876539   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:39.876601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:39.918750   70908 cri.go:89] found id: ""
	I0311 21:37:39.918785   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.918796   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:39.918813   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:39.918871   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:39.958486   70908 cri.go:89] found id: ""
	I0311 21:37:39.958517   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.958529   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:39.958537   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:39.958550   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:39.973899   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:39.973925   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:40.055954   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:40.055980   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:40.055995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:40.144801   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:40.144826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:40.189692   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:40.189722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:42.748909   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:42.763794   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:42.763877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:42.801470   70908 cri.go:89] found id: ""
	I0311 21:37:42.801493   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.801500   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:42.801506   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:42.801561   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:42.846267   70908 cri.go:89] found id: ""
	I0311 21:37:42.846294   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.846301   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:42.846307   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:42.846357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:42.890257   70908 cri.go:89] found id: ""
	I0311 21:37:42.890283   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.890294   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:42.890301   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:42.890357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:42.933605   70908 cri.go:89] found id: ""
	I0311 21:37:42.933628   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.933636   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:42.933643   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:42.933699   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:42.979020   70908 cri.go:89] found id: ""
	I0311 21:37:42.979043   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.979052   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:42.979059   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:42.979122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:43.021695   70908 cri.go:89] found id: ""
	I0311 21:37:43.021724   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.021734   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:43.021741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:43.021801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:43.064356   70908 cri.go:89] found id: ""
	I0311 21:37:43.064398   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.064406   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:43.064412   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:43.064457   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:43.101878   70908 cri.go:89] found id: ""
	I0311 21:37:43.101901   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.101909   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:43.101917   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:43.101930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:43.185836   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:43.185861   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:43.185874   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:43.268879   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:43.268912   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:43.319582   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:43.319614   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:43.374996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:43.375022   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:45.890408   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:45.905973   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:45.906041   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:45.951994   70908 cri.go:89] found id: ""
	I0311 21:37:45.952025   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.952040   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:45.952049   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:45.952112   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:45.992913   70908 cri.go:89] found id: ""
	I0311 21:37:45.992953   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.992964   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:45.992971   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:45.993034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:46.036306   70908 cri.go:89] found id: ""
	I0311 21:37:46.036334   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.036345   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:46.036353   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:46.036410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:46.077532   70908 cri.go:89] found id: ""
	I0311 21:37:46.077564   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.077576   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:46.077583   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:46.077633   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:46.115953   70908 cri.go:89] found id: ""
	I0311 21:37:46.115976   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.115983   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:46.115990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:46.116072   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:46.155665   70908 cri.go:89] found id: ""
	I0311 21:37:46.155699   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.155709   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:46.155717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:46.155775   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:46.197650   70908 cri.go:89] found id: ""
	I0311 21:37:46.197677   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.197696   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:46.197705   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:46.197766   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:46.243006   70908 cri.go:89] found id: ""
	I0311 21:37:46.243030   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.243037   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:46.243045   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:46.243058   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:46.294668   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:46.294696   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:46.308700   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:46.308721   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:46.387188   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:46.387207   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:46.387219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:46.480390   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:46.480423   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.027202   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:49.042292   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:49.042361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:49.081547   70908 cri.go:89] found id: ""
	I0311 21:37:49.081568   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.081579   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:49.081585   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:49.081632   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:49.127438   70908 cri.go:89] found id: ""
	I0311 21:37:49.127467   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.127477   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:49.127485   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:49.127545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:49.173992   70908 cri.go:89] found id: ""
	I0311 21:37:49.174024   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.174033   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:49.174042   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:49.174114   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:49.217087   70908 cri.go:89] found id: ""
	I0311 21:37:49.217120   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.217130   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:49.217138   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:49.217198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:49.255929   70908 cri.go:89] found id: ""
	I0311 21:37:49.255955   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.255970   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:49.255978   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:49.256037   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:49.296373   70908 cri.go:89] found id: ""
	I0311 21:37:49.296399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.296409   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:49.296417   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:49.296474   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:49.335063   70908 cri.go:89] found id: ""
	I0311 21:37:49.335092   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.335103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:49.335110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:49.335176   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:49.378374   70908 cri.go:89] found id: ""
	I0311 21:37:49.378399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.378406   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:49.378414   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:49.378427   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.422193   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:49.422220   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:49.474861   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:49.474893   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:49.490193   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:49.490219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:49.571857   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:49.571880   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:49.571895   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:52.168934   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:52.183086   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:52.183154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:52.221632   70908 cri.go:89] found id: ""
	I0311 21:37:52.221664   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.221675   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:52.221682   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:52.221743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:52.261550   70908 cri.go:89] found id: ""
	I0311 21:37:52.261575   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.261582   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:52.261588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:52.261638   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:52.302879   70908 cri.go:89] found id: ""
	I0311 21:37:52.302910   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.302920   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:52.302927   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:52.302987   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:52.346462   70908 cri.go:89] found id: ""
	I0311 21:37:52.346485   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.346494   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:52.346499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:52.346551   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:52.387949   70908 cri.go:89] found id: ""
	I0311 21:37:52.387977   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.387988   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:52.387995   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:52.388052   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:52.428527   70908 cri.go:89] found id: ""
	I0311 21:37:52.428564   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.428574   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:52.428582   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:52.428649   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:52.469516   70908 cri.go:89] found id: ""
	I0311 21:37:52.469548   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.469558   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:52.469565   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:52.469616   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:52.508371   70908 cri.go:89] found id: ""
	I0311 21:37:52.508407   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.508417   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:52.508429   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:52.508444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:52.587309   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:52.587346   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:52.587361   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:52.666419   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:52.666449   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:52.713150   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:52.713184   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:52.768011   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:52.768041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.284835   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:55.298742   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:55.298799   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:55.340215   70908 cri.go:89] found id: ""
	I0311 21:37:55.340240   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.340251   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:55.340257   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:55.340321   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:55.377930   70908 cri.go:89] found id: ""
	I0311 21:37:55.377956   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.377967   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:55.377974   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:55.378039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:55.418786   70908 cri.go:89] found id: ""
	I0311 21:37:55.418814   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.418822   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:55.418827   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:55.418883   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:55.461566   70908 cri.go:89] found id: ""
	I0311 21:37:55.461586   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.461593   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:55.461601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:55.461655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:55.502917   70908 cri.go:89] found id: ""
	I0311 21:37:55.502945   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.502955   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:55.502962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:55.503022   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:55.551417   70908 cri.go:89] found id: ""
	I0311 21:37:55.551441   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.551454   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:55.551462   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:55.551514   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:55.596060   70908 cri.go:89] found id: ""
	I0311 21:37:55.596092   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.596103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:55.596111   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:55.596172   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:55.635495   70908 cri.go:89] found id: ""
	I0311 21:37:55.635523   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.635535   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:55.635547   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:55.635564   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:55.691705   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:55.691735   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.707696   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:55.707718   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:55.780432   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:55.780452   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:55.780465   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:55.866033   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:55.866067   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:58.437299   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:58.453058   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:58.453125   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:58.493317   70908 cri.go:89] found id: ""
	I0311 21:37:58.493339   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.493347   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:58.493353   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:58.493408   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:58.543533   70908 cri.go:89] found id: ""
	I0311 21:37:58.543556   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.543567   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:58.543578   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:58.543634   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:58.585255   70908 cri.go:89] found id: ""
	I0311 21:37:58.585282   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.585292   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:58.585300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:58.585359   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:58.622393   70908 cri.go:89] found id: ""
	I0311 21:37:58.622421   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.622428   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:58.622434   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:58.622501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:58.661939   70908 cri.go:89] found id: ""
	I0311 21:37:58.661963   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.661971   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:58.661977   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:58.662034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:58.703628   70908 cri.go:89] found id: ""
	I0311 21:37:58.703663   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.703674   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:58.703682   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:58.703743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:58.742553   70908 cri.go:89] found id: ""
	I0311 21:37:58.742583   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.742594   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:58.742601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:58.742662   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:58.785016   70908 cri.go:89] found id: ""
	I0311 21:37:58.785040   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.785047   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:58.785055   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:58.785071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:58.857757   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:58.857773   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:58.857786   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:58.946120   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:58.946148   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:58.996288   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:58.996328   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:59.055371   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:59.055407   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:01.571092   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:01.591149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:01.591238   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:01.629156   70908 cri.go:89] found id: ""
	I0311 21:38:01.629184   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.629196   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:01.629203   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:01.629261   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:01.673656   70908 cri.go:89] found id: ""
	I0311 21:38:01.673680   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.673687   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:01.673692   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:01.673739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:01.713361   70908 cri.go:89] found id: ""
	I0311 21:38:01.713389   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.713397   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:01.713403   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:01.713450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:01.757256   70908 cri.go:89] found id: ""
	I0311 21:38:01.757286   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.757298   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:01.757305   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:01.757362   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:01.797538   70908 cri.go:89] found id: ""
	I0311 21:38:01.797565   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.797573   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:01.797580   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:01.797635   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:01.838664   70908 cri.go:89] found id: ""
	I0311 21:38:01.838692   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.838701   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:01.838707   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:01.838754   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:01.893638   70908 cri.go:89] found id: ""
	I0311 21:38:01.893668   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.893679   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:01.893686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:01.893747   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:01.935547   70908 cri.go:89] found id: ""
	I0311 21:38:01.935569   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.935577   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:01.935585   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:01.935596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:01.989964   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:01.989988   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:02.004949   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:02.004973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:02.082006   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:02.082024   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:02.082041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:02.171040   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:02.171072   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
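
Each retry cycle above lists CRI containers per control-plane component with "sudo crictl ps -a --quiet --name=<component>"; an empty result for every component is what produces the "0 containers" lines. A rough, self-contained Go sketch of that listing loop (illustrative only, assuming crictl is on PATH and may be invoked via sudo; this is not minikube's implementation):

// list_components.go - illustrative sketch of the per-component crictl listing seen in the log.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    components := []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }
    for _, name := range components {
        // --quiet prints only container IDs, one per line, or nothing if none match.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            fmt.Printf("%s: crictl failed: %v\n", name, err)
            continue
        }
        ids := strings.Fields(string(out))
        fmt.Printf("%s: %d containers\n", name, len(ids))
    }
}

Counting the IDs per component is enough to reproduce the "found id" / "0 containers" bookkeeping shown in the log lines above.
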
	I0311 21:38:04.724699   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:04.741445   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:04.741512   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:04.783924   70908 cri.go:89] found id: ""
	I0311 21:38:04.783951   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.783962   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:04.783969   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:04.784028   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:04.825806   70908 cri.go:89] found id: ""
	I0311 21:38:04.825835   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.825845   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:04.825852   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:04.825913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:04.864070   70908 cri.go:89] found id: ""
	I0311 21:38:04.864106   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.864118   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:04.864126   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:04.864181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:04.901735   70908 cri.go:89] found id: ""
	I0311 21:38:04.901759   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.901769   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:04.901777   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:04.901832   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:04.941473   70908 cri.go:89] found id: ""
	I0311 21:38:04.941496   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.941505   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:04.941513   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:04.941569   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:04.993132   70908 cri.go:89] found id: ""
	I0311 21:38:04.993162   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.993170   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:04.993178   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:04.993237   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:05.037925   70908 cri.go:89] found id: ""
	I0311 21:38:05.037950   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.037960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:05.037967   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:05.038026   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:05.080726   70908 cri.go:89] found id: ""
	I0311 21:38:05.080773   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.080784   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:05.080794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:05.080806   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:05.138205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:05.138233   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:05.155048   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:05.155071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:05.233067   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:05.233086   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:05.233099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:05.317897   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:05.317928   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:07.863484   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:07.877342   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:07.877411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:07.916352   70908 cri.go:89] found id: ""
	I0311 21:38:07.916374   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.916383   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:07.916391   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:07.916454   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:07.954833   70908 cri.go:89] found id: ""
	I0311 21:38:07.954854   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.954863   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:07.954870   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:07.954926   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:07.993124   70908 cri.go:89] found id: ""
	I0311 21:38:07.993152   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.993161   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:07.993168   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:07.993232   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:08.039081   70908 cri.go:89] found id: ""
	I0311 21:38:08.039108   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.039118   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:08.039125   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:08.039191   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:08.084627   70908 cri.go:89] found id: ""
	I0311 21:38:08.084650   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.084658   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:08.084665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:08.084712   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:08.125986   70908 cri.go:89] found id: ""
	I0311 21:38:08.126015   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.126026   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:08.126034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:08.126080   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:08.167149   70908 cri.go:89] found id: ""
	I0311 21:38:08.167176   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.167188   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:08.167193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:08.167252   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:08.204988   70908 cri.go:89] found id: ""
	I0311 21:38:08.205012   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.205020   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:08.205028   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:08.205043   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:08.295226   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:08.295268   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:08.357789   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:08.357820   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:08.434091   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:08.434132   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:08.455208   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:08.455240   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:08.529620   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:11.030060   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:11.044303   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:11.046353   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:11.088067   70908 cri.go:89] found id: ""
	I0311 21:38:11.088099   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.088110   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:11.088117   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:11.088177   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:11.131077   70908 cri.go:89] found id: ""
	I0311 21:38:11.131104   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.131114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:11.131121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:11.131181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:11.172409   70908 cri.go:89] found id: ""
	I0311 21:38:11.172431   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.172439   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:11.172444   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:11.172496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:11.216775   70908 cri.go:89] found id: ""
	I0311 21:38:11.216817   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.216825   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:11.216830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:11.216886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:11.255105   70908 cri.go:89] found id: ""
	I0311 21:38:11.255129   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.255137   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:11.255142   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:11.255205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:11.292397   70908 cri.go:89] found id: ""
	I0311 21:38:11.292429   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.292440   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:11.292448   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:11.292518   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:11.330376   70908 cri.go:89] found id: ""
	I0311 21:38:11.330397   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.330408   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:11.330415   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:11.330476   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:11.367699   70908 cri.go:89] found id: ""
	I0311 21:38:11.367727   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.367737   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:11.367748   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:11.367763   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:11.421847   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:11.421876   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:11.437570   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:11.437593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:11.522084   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:11.522108   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:11.522123   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:11.606181   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:11.606228   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.153952   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:14.175726   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:14.175798   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:14.221752   70908 cri.go:89] found id: ""
	I0311 21:38:14.221784   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.221798   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:14.221807   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:14.221895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:14.286690   70908 cri.go:89] found id: ""
	I0311 21:38:14.286720   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.286740   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:14.286757   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:14.286824   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:14.343764   70908 cri.go:89] found id: ""
	I0311 21:38:14.343790   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.343799   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:14.343806   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:14.343876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:14.381198   70908 cri.go:89] found id: ""
	I0311 21:38:14.381220   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.381230   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:14.381237   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:14.381307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:14.421578   70908 cri.go:89] found id: ""
	I0311 21:38:14.421603   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.421613   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:14.421620   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:14.421678   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:14.462945   70908 cri.go:89] found id: ""
	I0311 21:38:14.462972   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.462982   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:14.462990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:14.463049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:14.503503   70908 cri.go:89] found id: ""
	I0311 21:38:14.503532   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.503543   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:14.503550   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:14.503610   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:14.543987   70908 cri.go:89] found id: ""
	I0311 21:38:14.544021   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.544034   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:14.544045   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:14.544062   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:14.624781   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:14.624804   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:14.624821   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:14.707130   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:14.707161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.750815   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:14.750848   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:14.806855   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:14.806882   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:17.325267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:17.340421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:17.340483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:17.382808   70908 cri.go:89] found id: ""
	I0311 21:38:17.382831   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.382841   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:17.382849   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:17.382906   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:17.424838   70908 cri.go:89] found id: ""
	I0311 21:38:17.424865   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.424875   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:17.424883   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:17.424940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:17.466298   70908 cri.go:89] found id: ""
	I0311 21:38:17.466320   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.466327   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:17.466333   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:17.466397   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:17.506648   70908 cri.go:89] found id: ""
	I0311 21:38:17.506678   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.506685   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:17.506691   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:17.506739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:17.544019   70908 cri.go:89] found id: ""
	I0311 21:38:17.544048   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.544057   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:17.544067   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:17.544154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:17.583691   70908 cri.go:89] found id: ""
	I0311 21:38:17.583710   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.583717   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:17.583723   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:17.583768   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:17.624432   70908 cri.go:89] found id: ""
	I0311 21:38:17.624453   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.624460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:17.624466   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:17.624516   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:17.663253   70908 cri.go:89] found id: ""
	I0311 21:38:17.663294   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.663312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:17.663322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:17.663339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:17.749928   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:17.749962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:17.792817   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:17.792853   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:17.847391   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:17.847419   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:17.862813   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:17.862835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:17.935307   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
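
When no component containers are found, the loop falls back to gathering kubelet, dmesg, CRI-O and container-status logs before retrying. A hedged Go sketch of that gathering step (illustrative only; the command strings are copied from the log above, and running them locally instead of over minikube's ssh_runner, with systemd, journalctl and crictl available, is an assumption of this sketch):

// gather_logs.go - illustrative sketch of the log-gathering step in the retry loop.
package main

import (
    "fmt"
    "os/exec"
)

func main() {
    steps := map[string]string{
        "kubelet":          `sudo journalctl -u kubelet -n 400`,
        "dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
        "CRI-O":            `sudo journalctl -u crio -n 400`,
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }
    for name, cmd := range steps {
        // Each command is run through bash exactly as in the log lines above.
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("gathering %s logs failed: %v\n", name, err)
        }
        fmt.Printf("=== %s (%d bytes) ===\n%s\n", name, len(out), out)
    }
}
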
	I0311 21:38:20.435995   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:20.452441   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:20.452510   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:20.491960   70908 cri.go:89] found id: ""
	I0311 21:38:20.491985   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.491992   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:20.491998   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:20.492045   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:20.531679   70908 cri.go:89] found id: ""
	I0311 21:38:20.531700   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.531707   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:20.531712   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:20.531764   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:20.571666   70908 cri.go:89] found id: ""
	I0311 21:38:20.571687   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.571694   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:20.571699   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:20.571762   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:20.611165   70908 cri.go:89] found id: ""
	I0311 21:38:20.611187   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.611194   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:20.611199   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:20.611248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:20.648680   70908 cri.go:89] found id: ""
	I0311 21:38:20.648709   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.648720   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:20.648728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:20.648801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:20.690177   70908 cri.go:89] found id: ""
	I0311 21:38:20.690204   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.690215   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:20.690222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:20.690298   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:20.728918   70908 cri.go:89] found id: ""
	I0311 21:38:20.728949   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.728960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:20.728968   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:20.729039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:20.773559   70908 cri.go:89] found id: ""
	I0311 21:38:20.773586   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.773596   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:20.773607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:20.773623   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:20.788709   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:20.788750   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:20.869832   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:20.869856   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:20.869868   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:20.963515   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:20.963544   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:21.007029   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:21.007055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:23.566134   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:23.583855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:23.583911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:23.623605   70908 cri.go:89] found id: ""
	I0311 21:38:23.623633   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.623656   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:23.623664   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:23.623719   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:23.663058   70908 cri.go:89] found id: ""
	I0311 21:38:23.663081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.663091   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:23.663098   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:23.663157   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:23.701930   70908 cri.go:89] found id: ""
	I0311 21:38:23.701963   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.701975   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:23.701985   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:23.702049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:23.743925   70908 cri.go:89] found id: ""
	I0311 21:38:23.743955   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.743964   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:23.743970   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:23.744046   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:23.784030   70908 cri.go:89] found id: ""
	I0311 21:38:23.784055   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.784066   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:23.784073   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:23.784132   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:23.823054   70908 cri.go:89] found id: ""
	I0311 21:38:23.823081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.823089   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:23.823097   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:23.823156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:23.863629   70908 cri.go:89] found id: ""
	I0311 21:38:23.863654   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.863662   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:23.863668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:23.863724   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:23.904429   70908 cri.go:89] found id: ""
	I0311 21:38:23.904454   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.904462   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:23.904470   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:23.904481   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:23.962356   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:23.962393   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:23.977667   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:23.977689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:24.068791   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:24.068820   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:24.068835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:24.157857   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:24.157892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:26.705872   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:26.720840   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:26.720936   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:26.766449   70908 cri.go:89] found id: ""
	I0311 21:38:26.766480   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.766490   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:26.766496   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:26.766557   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:26.806179   70908 cri.go:89] found id: ""
	I0311 21:38:26.806203   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.806210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:26.806216   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:26.806275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:26.850737   70908 cri.go:89] found id: ""
	I0311 21:38:26.850765   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.850775   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:26.850785   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:26.850845   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:26.897694   70908 cri.go:89] found id: ""
	I0311 21:38:26.897722   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.897733   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:26.897744   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:26.897802   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:26.940940   70908 cri.go:89] found id: ""
	I0311 21:38:26.940962   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.940969   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:26.940975   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:26.941021   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:26.978576   70908 cri.go:89] found id: ""
	I0311 21:38:26.978604   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.978614   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:26.978625   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:26.978682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:27.016331   70908 cri.go:89] found id: ""
	I0311 21:38:27.016363   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.016374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:27.016381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:27.016439   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:27.061541   70908 cri.go:89] found id: ""
	I0311 21:38:27.061569   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.061580   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:27.061590   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:27.061609   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:27.154977   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:27.155017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:27.204458   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:27.204488   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:27.259960   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:27.259997   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:27.277806   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:27.277832   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:27.356111   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:29.856828   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:29.871331   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:29.871413   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:29.912867   70908 cri.go:89] found id: ""
	I0311 21:38:29.912895   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.912904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:29.912910   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:29.912973   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:29.953458   70908 cri.go:89] found id: ""
	I0311 21:38:29.953483   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.953491   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:29.953497   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:29.953553   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:29.997873   70908 cri.go:89] found id: ""
	I0311 21:38:29.997904   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.997912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:29.997921   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:29.997983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:30.038831   70908 cri.go:89] found id: ""
	I0311 21:38:30.038861   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.038872   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:30.038880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:30.038940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:30.082089   70908 cri.go:89] found id: ""
	I0311 21:38:30.082117   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.082127   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:30.082135   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:30.082213   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:30.121167   70908 cri.go:89] found id: ""
	I0311 21:38:30.121198   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.121209   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:30.121216   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:30.121274   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:30.162342   70908 cri.go:89] found id: ""
	I0311 21:38:30.162371   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.162380   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:30.162393   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:30.162452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:30.201727   70908 cri.go:89] found id: ""
	I0311 21:38:30.201753   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.201761   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:30.201769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:30.201780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:30.283314   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:30.283346   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:30.333900   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:30.333930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:30.391761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:30.391798   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:30.407907   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:30.407930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:30.489560   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:32.989976   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:33.004724   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:33.004814   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:33.049701   70908 cri.go:89] found id: ""
	I0311 21:38:33.049733   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.049743   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:33.049753   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:33.049823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:33.097759   70908 cri.go:89] found id: ""
	I0311 21:38:33.097792   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.097804   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:33.097811   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:33.097875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:33.143257   70908 cri.go:89] found id: ""
	I0311 21:38:33.143291   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.143300   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:33.143308   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:33.143376   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:33.187434   70908 cri.go:89] found id: ""
	I0311 21:38:33.187464   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.187477   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:33.187483   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:33.187558   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:33.236201   70908 cri.go:89] found id: ""
	I0311 21:38:33.236230   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.236239   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:33.236245   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:33.236312   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:33.279710   70908 cri.go:89] found id: ""
	I0311 21:38:33.279783   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.279816   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:33.279830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:33.279898   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:33.325022   70908 cri.go:89] found id: ""
	I0311 21:38:33.325053   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.325064   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:33.325072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:33.325138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:33.368588   70908 cri.go:89] found id: ""
	I0311 21:38:33.368614   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.368622   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:33.368629   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:33.368640   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:33.427761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:33.427801   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:33.444440   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:33.444472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:33.527745   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:33.527764   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:33.527775   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:33.608215   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:33.608248   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:36.158253   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:36.172370   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:36.172438   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:36.216905   70908 cri.go:89] found id: ""
	I0311 21:38:36.216935   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.216945   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:36.216951   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:36.216996   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:36.260844   70908 cri.go:89] found id: ""
	I0311 21:38:36.260875   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.260885   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:36.260890   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:36.260941   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:36.306730   70908 cri.go:89] found id: ""
	I0311 21:38:36.306755   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.306767   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:36.306772   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:36.306820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:36.346957   70908 cri.go:89] found id: ""
	I0311 21:38:36.346993   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.347004   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:36.347012   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:36.347082   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:36.392265   70908 cri.go:89] found id: ""
	I0311 21:38:36.392295   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.392306   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:36.392313   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:36.392379   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:36.433383   70908 cri.go:89] found id: ""
	I0311 21:38:36.433407   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.433414   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:36.433421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:36.433467   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:36.471291   70908 cri.go:89] found id: ""
	I0311 21:38:36.471325   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.471336   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:36.471344   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:36.471411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:36.514662   70908 cri.go:89] found id: ""
	I0311 21:38:36.514688   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.514698   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:36.514708   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:36.514722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:36.533222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:36.533251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:36.616359   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:36.616384   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:36.616400   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:36.719105   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:36.719137   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:36.771125   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:36.771156   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.324847   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:39.341149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:39.341218   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:39.380284   70908 cri.go:89] found id: ""
	I0311 21:38:39.380324   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.380335   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:39.380343   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:39.380407   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:39.429860   70908 cri.go:89] found id: ""
	I0311 21:38:39.429886   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.429894   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:39.429899   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:39.429960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:39.468089   70908 cri.go:89] found id: ""
	I0311 21:38:39.468113   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.468121   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:39.468127   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:39.468188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:39.508589   70908 cri.go:89] found id: ""
	I0311 21:38:39.508617   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.508628   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:39.508636   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:39.508695   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:39.552427   70908 cri.go:89] found id: ""
	I0311 21:38:39.552451   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.552459   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:39.552464   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:39.552511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:39.592586   70908 cri.go:89] found id: ""
	I0311 21:38:39.592607   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.592615   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:39.592621   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:39.592670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:39.637138   70908 cri.go:89] found id: ""
	I0311 21:38:39.637167   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.637178   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:39.637186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:39.637248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:39.679422   70908 cri.go:89] found id: ""
	I0311 21:38:39.679457   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.679470   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:39.679482   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:39.679499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.734815   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:39.734850   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:39.750448   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:39.750472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:39.832912   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:39.832936   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:39.832951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:39.924020   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:39.924061   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:42.472932   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:42.488034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:42.488090   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:42.530945   70908 cri.go:89] found id: ""
	I0311 21:38:42.530971   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.530981   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:42.530989   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:42.531053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:42.571906   70908 cri.go:89] found id: ""
	I0311 21:38:42.571939   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.571951   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:42.571960   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:42.572029   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:42.613198   70908 cri.go:89] found id: ""
	I0311 21:38:42.613228   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.613239   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:42.613247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:42.613330   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:42.654740   70908 cri.go:89] found id: ""
	I0311 21:38:42.654762   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.654770   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:42.654775   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:42.654821   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:42.694797   70908 cri.go:89] found id: ""
	I0311 21:38:42.694836   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.694847   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:42.694854   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:42.694931   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:42.738918   70908 cri.go:89] found id: ""
	I0311 21:38:42.738946   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.738958   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:42.738965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:42.739032   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:42.780836   70908 cri.go:89] found id: ""
	I0311 21:38:42.780870   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.780881   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:42.780888   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:42.780943   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:42.824672   70908 cri.go:89] found id: ""
	I0311 21:38:42.824701   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.824712   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:42.824721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:42.824747   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:42.877219   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:42.877253   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:42.934996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:42.935033   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:42.952125   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:42.952152   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:43.036657   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:43.036678   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:43.036695   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:45.629959   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:45.648501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:45.648581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:45.690083   70908 cri.go:89] found id: ""
	I0311 21:38:45.690117   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.690128   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:45.690136   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:45.690201   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:45.736497   70908 cri.go:89] found id: ""
	I0311 21:38:45.736519   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.736526   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:45.736531   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:45.736576   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:45.778590   70908 cri.go:89] found id: ""
	I0311 21:38:45.778625   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.778636   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:45.778645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:45.778723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:45.822322   70908 cri.go:89] found id: ""
	I0311 21:38:45.822351   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.822359   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:45.822365   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:45.822419   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:45.868591   70908 cri.go:89] found id: ""
	I0311 21:38:45.868618   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.868627   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:45.868633   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:45.868680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:45.915137   70908 cri.go:89] found id: ""
	I0311 21:38:45.915165   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.915178   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:45.915187   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:45.915258   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:45.960432   70908 cri.go:89] found id: ""
	I0311 21:38:45.960459   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.960469   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:45.960476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:45.960529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:46.006089   70908 cri.go:89] found id: ""
	I0311 21:38:46.006168   70908 logs.go:276] 0 containers: []
	W0311 21:38:46.006185   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:46.006195   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:46.006209   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:46.064257   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:46.064296   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:46.080304   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:46.080337   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:46.177978   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:46.178001   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:46.178017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:46.265260   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:46.265298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:48.814221   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:48.835695   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:48.835793   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:48.898391   70908 cri.go:89] found id: ""
	I0311 21:38:48.898418   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.898429   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:48.898437   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:48.898501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:48.972552   70908 cri.go:89] found id: ""
	I0311 21:38:48.972596   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.972607   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:48.972617   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:48.972684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:49.022346   70908 cri.go:89] found id: ""
	I0311 21:38:49.022371   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.022379   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:49.022384   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:49.022430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:49.078415   70908 cri.go:89] found id: ""
	I0311 21:38:49.078444   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.078455   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:49.078463   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:49.078526   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:49.119369   70908 cri.go:89] found id: ""
	I0311 21:38:49.119402   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.119412   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:49.119420   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:49.119497   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:49.169866   70908 cri.go:89] found id: ""
	I0311 21:38:49.169897   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.169908   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:49.169916   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:49.169978   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:49.223619   70908 cri.go:89] found id: ""
	I0311 21:38:49.223642   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.223650   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:49.223656   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:49.223704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:49.278499   70908 cri.go:89] found id: ""
	I0311 21:38:49.278531   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.278542   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:49.278551   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:49.278563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:49.294734   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:49.294760   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:49.390223   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:49.390252   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:49.390267   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:49.481214   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:49.481250   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:49.530285   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:49.530321   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.087848   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:52.108284   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:52.108351   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:52.161648   70908 cri.go:89] found id: ""
	I0311 21:38:52.161680   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.161691   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:52.161698   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:52.161763   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:52.206552   70908 cri.go:89] found id: ""
	I0311 21:38:52.206577   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.206588   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:52.206596   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:52.206659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:52.253954   70908 cri.go:89] found id: ""
	I0311 21:38:52.253984   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.253996   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:52.254004   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:52.254068   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:52.302343   70908 cri.go:89] found id: ""
	I0311 21:38:52.302384   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.302396   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:52.302404   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:52.302472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:52.345581   70908 cri.go:89] found id: ""
	I0311 21:38:52.345608   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.345618   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:52.345624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:52.345683   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:52.392502   70908 cri.go:89] found id: ""
	I0311 21:38:52.392531   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.392542   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:52.392549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:52.392601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:52.447625   70908 cri.go:89] found id: ""
	I0311 21:38:52.447651   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.447661   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:52.447668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:52.447728   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:52.490965   70908 cri.go:89] found id: ""
	I0311 21:38:52.490994   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.491007   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:52.491019   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:52.491034   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:52.539604   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:52.539650   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.597735   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:52.597771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:52.617572   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:52.617610   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:52.706724   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:52.706753   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:52.706769   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:55.293550   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:55.313904   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:55.314005   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:55.368607   70908 cri.go:89] found id: ""
	I0311 21:38:55.368639   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.368647   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:55.368654   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:55.368714   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:55.434052   70908 cri.go:89] found id: ""
	I0311 21:38:55.434081   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.434092   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:55.434100   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:55.434189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:55.483532   70908 cri.go:89] found id: ""
	I0311 21:38:55.483562   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.483572   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:55.483579   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:55.483647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:55.528681   70908 cri.go:89] found id: ""
	I0311 21:38:55.528708   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.528721   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:55.528728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:55.528825   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:55.583143   70908 cri.go:89] found id: ""
	I0311 21:38:55.583167   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.583174   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:55.583179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:55.583240   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:55.636577   70908 cri.go:89] found id: ""
	I0311 21:38:55.636599   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.636607   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:55.636612   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:55.636670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:55.697268   70908 cri.go:89] found id: ""
	I0311 21:38:55.697295   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.697306   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:55.697314   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:55.697374   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:55.749272   70908 cri.go:89] found id: ""
	I0311 21:38:55.749302   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.749312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:55.749322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:55.749335   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:55.841581   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:55.841643   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:55.898537   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:55.898574   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:55.973278   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:55.973329   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:55.992958   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:55.992986   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:56.084193   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
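The cycles above appear to be minikube's control-plane restart probe: roughly every three seconds (21:38:33, 21:38:36, 21:38:39, ...) it re-runs the same checks and regathers logs until the API server answers. A minimal sketch of one probe cycle, using only commands that appear verbatim in the log (the kubectl binary path and kubeconfig path are as reported there):

    # is a kube-apiserver process for this profile running?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # does CRI-O know about a kube-apiserver container (running or exited)?
    sudo crictl ps -a --quiet --name=kube-apiserver
    # can the bundled kubectl reach the API server on localhost:8443?
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

In every cycle here crictl finds no control-plane containers and kubectl gets "connection refused" on localhost:8443, which is why the restart attempt below eventually gives up and falls back to a full cluster reset.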
	I0311 21:38:58.584354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:58.604767   70908 kubeadm.go:591] duration metric: took 4m4.440744932s to restartPrimaryControlPlane
	W0311 21:38:58.604844   70908 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:38:58.604872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:38:59.965834   70908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.36094005s)
	I0311 21:38:59.965906   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:38:59.982020   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:38:59.994794   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:39:00.007116   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:39:00.007138   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:39:00.007182   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:39:00.019744   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:39:00.019802   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:39:00.033311   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:39:00.045608   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:39:00.045685   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:39:00.059722   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.071140   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:39:00.071199   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.082635   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:39:00.093311   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:39:00.093374   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
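The block above is the stale-kubeconfig check that precedes kubeadm init: for each of the four config files minikube greps for the expected control-plane endpoint and removes the file when the endpoint is not found. A condensed sketch of the same check, assuming the endpoint and file list shown in the log:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done

Because the preceding kubeadm reset already deleted these files, every grep exits with status 2 ("No such file or directory") and the rm calls are effectively no-ops; the init below then regenerates all four files from scratch.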
	I0311 21:39:00.104995   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:39:00.372164   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:40:56.380462   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:40:56.380539   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:40:56.382217   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:56.382264   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:56.382349   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:56.382450   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:56.382619   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:56.382712   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:56.384498   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:56.384579   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:56.384636   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:56.384766   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:56.384863   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:56.384967   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:56.385037   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:56.385139   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:56.385208   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:56.385281   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:56.385357   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:56.385408   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:56.385492   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:56.385567   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:56.385644   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:56.385769   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:56.385855   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:56.385962   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:56.386053   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:56.386104   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:56.386184   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:56.387594   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:56.387671   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:56.387738   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:56.387811   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:56.387914   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:56.388107   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:40:56.388182   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:40:56.388297   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388522   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388614   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388844   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388914   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389074   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389131   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389314   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389405   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389594   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389603   70908 kubeadm.go:309] 
	I0311 21:40:56.389653   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:40:56.389720   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:40:56.389732   70908 kubeadm.go:309] 
	I0311 21:40:56.389779   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:40:56.389811   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:40:56.389924   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:40:56.389933   70908 kubeadm.go:309] 
	I0311 21:40:56.390058   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:40:56.390109   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:40:56.390150   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:40:56.390159   70908 kubeadm.go:309] 
	I0311 21:40:56.390299   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:40:56.390395   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:40:56.390409   70908 kubeadm.go:309] 
	I0311 21:40:56.390512   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:40:56.390603   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:40:56.390702   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:40:56.390803   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:40:56.390833   70908 kubeadm.go:309] 
	W0311 21:40:56.390936   70908 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0311 21:40:56.390995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:40:56.941058   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:56.958276   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:40:56.970464   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:40:56.970493   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:40:56.970552   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:40:56.983314   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:40:56.983372   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:40:56.993791   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:40:57.004040   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:40:57.004098   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:40:57.014471   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.024751   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:40:57.024805   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.035389   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:40:57.045511   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:40:57.045556   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:40:57.056774   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:40:57.140620   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:57.140789   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:57.310076   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:57.310193   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:57.310280   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:57.506834   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:57.509261   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:57.509362   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:57.509446   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:57.509576   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:57.509669   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:57.509765   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:57.509839   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:57.509949   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:57.510004   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:57.510109   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:57.510231   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:57.510274   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:57.510361   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:57.585562   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:57.644460   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:57.784382   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:57.848952   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:57.867302   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:57.867791   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:57.867864   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:58.036523   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:58.039051   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:58.039176   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:58.054234   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:58.055548   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:58.057378   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:58.060167   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:41:38.062360   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:41:38.062886   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:38.063137   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:43.063592   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:43.063788   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:53.064505   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:53.064773   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:13.065744   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:13.065995   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.066718   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:53.067030   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.067070   70908 kubeadm.go:309] 
	I0311 21:42:53.067135   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:42:53.067191   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:42:53.067203   70908 kubeadm.go:309] 
	I0311 21:42:53.067259   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:42:53.067318   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:42:53.067456   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:42:53.067466   70908 kubeadm.go:309] 
	I0311 21:42:53.067590   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:42:53.067650   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:42:53.067724   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:42:53.067735   70908 kubeadm.go:309] 
	I0311 21:42:53.067889   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:42:53.068021   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:42:53.068036   70908 kubeadm.go:309] 
	I0311 21:42:53.068169   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:42:53.068297   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:42:53.068412   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:42:53.068512   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:42:53.068523   70908 kubeadm.go:309] 
	I0311 21:42:53.069455   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:42:53.069572   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:42:53.069682   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:42:53.069775   70908 kubeadm.go:393] duration metric: took 7m58.960224884s to StartCluster
	I0311 21:42:53.069833   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:42:53.069899   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:42:53.120459   70908 cri.go:89] found id: ""
	I0311 21:42:53.120486   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.120497   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:42:53.120505   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:42:53.120564   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:42:53.159639   70908 cri.go:89] found id: ""
	I0311 21:42:53.159667   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.159676   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:42:53.159682   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:42:53.159738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:42:53.199584   70908 cri.go:89] found id: ""
	I0311 21:42:53.199607   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.199614   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:42:53.199619   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:42:53.199676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:42:53.238868   70908 cri.go:89] found id: ""
	I0311 21:42:53.238901   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.238908   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:42:53.238917   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:42:53.238963   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:42:53.282172   70908 cri.go:89] found id: ""
	I0311 21:42:53.282205   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.282216   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:42:53.282225   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:42:53.282278   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:42:53.318450   70908 cri.go:89] found id: ""
	I0311 21:42:53.318481   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.318491   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:42:53.318499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:42:53.318559   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:42:53.360887   70908 cri.go:89] found id: ""
	I0311 21:42:53.360913   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.360923   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:42:53.360930   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:42:53.361027   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:42:53.414181   70908 cri.go:89] found id: ""
	I0311 21:42:53.414209   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.414220   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:42:53.414232   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:42:53.414247   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:42:53.478658   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:42:53.478689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:42:53.494577   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:42:53.494604   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:42:53.586460   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:42:53.586483   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:42:53.586500   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:42:53.697218   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:42:53.697251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0311 21:42:53.746291   70908 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0311 21:42:53.746336   70908 out.go:239] * 
	W0311 21:42:53.746388   70908 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.746409   70908 out.go:239] * 
	W0311 21:42:53.747362   70908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 21:42:53.750888   70908 out.go:177] 
	W0311 21:42:53.752146   70908 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.752211   70908 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0311 21:42:53.752239   70908 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0311 21:42:53.753832   70908 out.go:177] 

                                                
                                                
** /stderr **
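The repeated "connection refused" on http://localhost:10248/healthz above means the kubelet inside the VM never started serving its health endpoint, so kubeadm timed out in the wait-control-plane phase and no control-plane containers were found. A minimal sketch of the troubleshooting steps the log itself suggests, run from the test host against this profile (assumes the VM from this run, old-k8s-version-239315, is still up and that the commands are issued through minikube ssh):

	# inspect kubelet state and recent logs inside the VM
	minikube -p old-k8s-version-239315 ssh -- sudo systemctl status kubelet --no-pager
	minikube -p old-k8s-version-239315 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# list any control-plane containers CRI-O managed to start
	minikube -p old-k8s-version-239315 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a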
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-239315 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
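The stderr block ends with minikube's own suggestion to pass --extra-config=kubelet.cgroup-driver=systemd. A retry of the failing start command with that one flag added (all other flags copied verbatim from the invocation above; whether it actually resolves the v1.20.0/CRI-O kubelet startup failure is not verified here) would look like:

	out/minikube-linux-amd64 start -p old-k8s-version-239315 --memory=2200 \
		--alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
		--disable-driver-mounts --keep-context=false --driver=kvm2 \
		--container-runtime=crio --kubernetes-version=v1.20.0 \
		--extra-config=kubelet.cgroup-driver=systemd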
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315: exit status 2 (296.095369ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-239315 logs -n 25
E0311 21:42:55.427666   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-239315 logs -n 25: (1.516147445s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-427678 sudo cat                              | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo find                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo crio                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-427678                                       | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-124446 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | disable-driver-mounts-124446                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:26 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-766430  | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-324578             | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-743937            | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239315        | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-766430       | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-324578                  | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:40 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-743937                 | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239315             | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC | 11 Mar 24 21:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
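	For reference, the "Last Start" log below corresponds to the final row of the table above; assembled verbatim from that row's flags (with the binary path taken from the MINIKUBE_BIN value logged further down), the invocation is:
	
	  out/minikube-linux-amd64 start -p old-k8s-version-239315 \
	    --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0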
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 21:30:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
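	Given the [IWEF] severity prefix in this format, warnings and errors can be pulled out of the dump below with a plain grep (the file name is hypothetical; here the log is inline):
	
	  grep -E '^[[:space:]]*[WE][0-9]{4} ' last-start.log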
	I0311 21:30:01.044166   70908 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:30:01.044254   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044259   70908 out.go:304] Setting ErrFile to fd 2...
	I0311 21:30:01.044263   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044451   70908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:30:01.044970   70908 out.go:298] Setting JSON to false
	I0311 21:30:01.045838   70908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7950,"bootTime":1710184651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:30:01.045894   70908 start.go:139] virtualization: kvm guest
	I0311 21:30:01.048311   70908 out.go:177] * [old-k8s-version-239315] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:30:01.050003   70908 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:30:01.050011   70908 notify.go:220] Checking for updates...
	I0311 21:30:01.051498   70908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:30:01.052999   70908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:30:01.054439   70908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:30:01.055768   70908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:30:01.057137   70908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:30:01.058760   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:30:01.059167   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.059205   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.073734   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0311 21:30:01.074087   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.074586   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.074618   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.074966   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.075173   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.077005   70908 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 21:30:01.078583   70908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:30:01.078879   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.078914   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.093226   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0311 21:30:01.093614   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.094174   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.094243   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.094616   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.094805   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.128302   70908 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 21:30:01.129965   70908 start.go:297] selected driver: kvm2
	I0311 21:30:01.129991   70908 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.130113   70908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:30:01.131050   70908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.131115   70908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:30:01.145452   70908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:30:01.145782   70908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:30:01.145811   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:30:01.145819   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:30:01.145863   70908 start.go:340] cluster config:
	{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.145954   70908 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.147725   70908 out.go:177] * Starting "old-k8s-version-239315" primary control-plane node in "old-k8s-version-239315" cluster
	I0311 21:30:01.148916   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:30:01.148943   70908 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0311 21:30:01.148955   70908 cache.go:56] Caching tarball of preloaded images
	I0311 21:30:01.149022   70908 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:30:01.149032   70908 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0311 21:30:01.149114   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:30:01.149263   70908 start.go:360] acquireMachinesLock for old-k8s-version-239315: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:30:05.352968   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:08.425086   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:14.504922   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:17.577080   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:23.656996   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:26.729009   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:32.809042   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:35.881008   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:41.960992   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:45.033096   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:51.112925   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:54.184989   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:00.265058   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:03.337012   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:09.416960   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:12.489005   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:18.569021   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:21.640990   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:27.721019   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:30.793040   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:36.872985   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:39.945005   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:46.025035   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:49.096988   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:55.176985   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:58.249009   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:04.328981   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:07.401006   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:13.480986   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:16.552965   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:22.632997   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:25.705064   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:31.784993   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:34.857027   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:40.937002   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:44.008989   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:50.088959   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:53.161092   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:59.241045   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:02.313084   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:08.393056   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:11.465079   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:17.545057   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:20.617082   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:26.697000   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:29.768926   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:35.849024   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:38.921096   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
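	The repeated "no route to host" entries above come from a different start process (pid 70417, which the later "releasing machines lock" line ties to default-k8s-diff-port-766430) dialing 192.168.61.11:22 while that VM never becomes reachable. A minimal manual probe of the same endpoint, assuming virsh and nc are available on the Jenkins host, would be:
	
	  virsh list --all                                        # is the domain actually running?
	  virsh net-dhcp-leases mk-default-k8s-diff-port-766430   # network name assumed from the mk-<profile> pattern seen below
	  nc -vz -w 5 192.168.61.11 22                            # does anything answer on the SSH port being dialed?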
	I0311 21:33:41.925305   70458 start.go:364] duration metric: took 4m36.419231792s to acquireMachinesLock for "no-preload-324578"
	I0311 21:33:41.925360   70458 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:33:41.925368   70458 fix.go:54] fixHost starting: 
	I0311 21:33:41.925768   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:33:41.925798   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:33:41.940654   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0311 21:33:41.941130   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:33:41.941619   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:33:41.941646   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:33:41.942045   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:33:41.942209   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:33:41.942370   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:33:41.944009   70458 fix.go:112] recreateIfNeeded on no-preload-324578: state=Stopped err=<nil>
	I0311 21:33:41.944030   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	W0311 21:33:41.944231   70458 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:33:41.946020   70458 out.go:177] * Restarting existing kvm2 VM for "no-preload-324578" ...
	I0311 21:33:41.922711   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:33:41.922754   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:33:41.923131   70417 buildroot.go:166] provisioning hostname "default-k8s-diff-port-766430"
	I0311 21:33:41.923158   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:33:41.923430   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:33:41.925178   70417 machine.go:97] duration metric: took 4m37.414792129s to provisionDockerMachine
	I0311 21:33:41.925213   70417 fix.go:56] duration metric: took 4m37.435982654s for fixHost
	I0311 21:33:41.925219   70417 start.go:83] releasing machines lock for "default-k8s-diff-port-766430", held for 4m37.436000925s
	W0311 21:33:41.925242   70417 start.go:713] error starting host: provision: host is not running
	W0311 21:33:41.925330   70417 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0311 21:33:41.925343   70417 start.go:728] Will try again in 5 seconds ...
	I0311 21:33:41.947495   70458 main.go:141] libmachine: (no-preload-324578) Calling .Start
	I0311 21:33:41.947676   70458 main.go:141] libmachine: (no-preload-324578) Ensuring networks are active...
	I0311 21:33:41.948386   70458 main.go:141] libmachine: (no-preload-324578) Ensuring network default is active
	I0311 21:33:41.948724   70458 main.go:141] libmachine: (no-preload-324578) Ensuring network mk-no-preload-324578 is active
	I0311 21:33:41.949117   70458 main.go:141] libmachine: (no-preload-324578) Getting domain xml...
	I0311 21:33:41.949876   70458 main.go:141] libmachine: (no-preload-324578) Creating domain...
	I0311 21:33:43.129733   70458 main.go:141] libmachine: (no-preload-324578) Waiting to get IP...
	I0311 21:33:43.130601   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.131006   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.131053   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.130975   71444 retry.go:31] will retry after 209.203314ms: waiting for machine to come up
	I0311 21:33:43.341724   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.342324   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.342361   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.342279   71444 retry.go:31] will retry after 375.396917ms: waiting for machine to come up
	I0311 21:33:43.718906   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.719329   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.719351   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.719288   71444 retry.go:31] will retry after 428.365393ms: waiting for machine to come up
	I0311 21:33:44.148895   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:44.149334   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:44.149358   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:44.149284   71444 retry.go:31] will retry after 561.478535ms: waiting for machine to come up
	I0311 21:33:44.712065   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:44.712548   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:44.712576   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:44.712465   71444 retry.go:31] will retry after 700.993236ms: waiting for machine to come up
	I0311 21:33:46.926379   70417 start.go:360] acquireMachinesLock for default-k8s-diff-port-766430: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:33:45.415695   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:45.416242   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:45.416276   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:45.416215   71444 retry.go:31] will retry after 809.474202ms: waiting for machine to come up
	I0311 21:33:46.227098   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:46.227573   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:46.227608   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:46.227520   71444 retry.go:31] will retry after 1.075187328s: waiting for machine to come up
	I0311 21:33:47.303981   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:47.304454   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:47.304483   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:47.304397   71444 retry.go:31] will retry after 1.145290319s: waiting for machine to come up
	I0311 21:33:48.451871   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:48.452316   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:48.452350   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:48.452267   71444 retry.go:31] will retry after 1.172261063s: waiting for machine to come up
	I0311 21:33:49.626502   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:49.627067   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:49.627089   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:49.627023   71444 retry.go:31] will retry after 2.201479026s: waiting for machine to come up
	I0311 21:33:51.831519   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:51.831972   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:51.832008   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:51.831905   71444 retry.go:31] will retry after 2.888101699s: waiting for machine to come up
	I0311 21:33:54.721322   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:54.721753   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:54.721773   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:54.721722   71444 retry.go:31] will retry after 3.512655296s: waiting for machine to come up
	I0311 21:33:58.235767   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:58.236180   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:58.236219   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:58.236141   71444 retry.go:31] will retry after 3.975760652s: waiting for machine to come up
	I0311 21:34:03.525918   70604 start.go:364] duration metric: took 4m44.449252209s to acquireMachinesLock for "embed-certs-743937"
	I0311 21:34:03.525995   70604 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:03.526008   70604 fix.go:54] fixHost starting: 
	I0311 21:34:03.526428   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:03.526470   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:03.542427   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I0311 21:34:03.542857   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:03.543292   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:34:03.543317   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:03.543616   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:03.543806   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:03.543991   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:34:03.545366   70604 fix.go:112] recreateIfNeeded on embed-certs-743937: state=Stopped err=<nil>
	I0311 21:34:03.545391   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	W0311 21:34:03.545540   70604 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:03.548158   70604 out.go:177] * Restarting existing kvm2 VM for "embed-certs-743937" ...
	I0311 21:34:03.549803   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Start
	I0311 21:34:03.549966   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring networks are active...
	I0311 21:34:03.550712   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring network default is active
	I0311 21:34:03.551124   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring network mk-embed-certs-743937 is active
	I0311 21:34:03.551528   70604 main.go:141] libmachine: (embed-certs-743937) Getting domain xml...
	I0311 21:34:03.552226   70604 main.go:141] libmachine: (embed-certs-743937) Creating domain...
	I0311 21:34:02.213709   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.214152   70458 main.go:141] libmachine: (no-preload-324578) Found IP for machine: 192.168.39.36
	I0311 21:34:02.214181   70458 main.go:141] libmachine: (no-preload-324578) Reserving static IP address...
	I0311 21:34:02.214196   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has current primary IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.214631   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "no-preload-324578", mac: "52:54:00:00:fc:98", ip: "192.168.39.36"} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.214655   70458 main.go:141] libmachine: (no-preload-324578) DBG | skip adding static IP to network mk-no-preload-324578 - found existing host DHCP lease matching {name: "no-preload-324578", mac: "52:54:00:00:fc:98", ip: "192.168.39.36"}
	I0311 21:34:02.214666   70458 main.go:141] libmachine: (no-preload-324578) Reserved static IP address: 192.168.39.36
	I0311 21:34:02.214680   70458 main.go:141] libmachine: (no-preload-324578) Waiting for SSH to be available...
	I0311 21:34:02.214704   70458 main.go:141] libmachine: (no-preload-324578) DBG | Getting to WaitForSSH function...
	I0311 21:34:02.216798   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.217068   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.217111   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.217285   70458 main.go:141] libmachine: (no-preload-324578) DBG | Using SSH client type: external
	I0311 21:34:02.217316   70458 main.go:141] libmachine: (no-preload-324578) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa (-rw-------)
	I0311 21:34:02.217356   70458 main.go:141] libmachine: (no-preload-324578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:02.217374   70458 main.go:141] libmachine: (no-preload-324578) DBG | About to run SSH command:
	I0311 21:34:02.217389   70458 main.go:141] libmachine: (no-preload-324578) DBG | exit 0
	I0311 21:34:02.340837   70458 main.go:141] libmachine: (no-preload-324578) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:02.341154   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetConfigRaw
	I0311 21:34:02.341752   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:02.344368   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.344756   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.344791   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.344942   70458 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/config.json ...
	I0311 21:34:02.345142   70458 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:02.345159   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:02.345353   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.347647   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.348001   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.348029   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.348118   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.348284   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.348432   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.348548   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.348704   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.348913   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.348925   70458 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:02.457273   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:02.457298   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.457523   70458 buildroot.go:166] provisioning hostname "no-preload-324578"
	I0311 21:34:02.457554   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.457757   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.460347   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.460658   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.460688   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.460913   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.461126   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.461286   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.461415   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.461574   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.461758   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.461775   70458 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-324578 && echo "no-preload-324578" | sudo tee /etc/hostname
	I0311 21:34:02.583388   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-324578
	
	I0311 21:34:02.583414   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.586043   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.586399   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.586431   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.586592   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.586799   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.586957   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.587084   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.587271   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.587433   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.587449   70458 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-324578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-324578/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-324578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:02.702365   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:02.702399   70458 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:02.702420   70458 buildroot.go:174] setting up certificates
	I0311 21:34:02.702431   70458 provision.go:84] configureAuth start
	I0311 21:34:02.702439   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.702725   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:02.705459   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.705882   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.705902   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.706048   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.708166   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.708476   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.708502   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.708618   70458 provision.go:143] copyHostCerts
	I0311 21:34:02.708675   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:02.708684   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:02.708764   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:02.708875   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:02.708885   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:02.708911   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:02.708977   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:02.708984   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:02.709005   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:02.709063   70458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.no-preload-324578 san=[127.0.0.1 192.168.39.36 localhost minikube no-preload-324578]
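	The server certificate minted here should carry exactly the SAN list printed in the line above; assuming openssl is available on the host, that can be checked against the same file path:
	
	  openssl x509 -in /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem \
	    -noout -text | grep -A1 'Subject Alternative Name'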
	I0311 21:34:02.823423   70458 provision.go:177] copyRemoteCerts
	I0311 21:34:02.823484   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:02.823508   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.826221   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.826538   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.826584   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.826743   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.826974   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.827158   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.827344   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:02.912138   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:02.938138   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0311 21:34:02.967391   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:34:02.992208   70458 provision.go:87] duration metric: took 289.765831ms to configureAuth
	I0311 21:34:02.992232   70458 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:02.992376   70458 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:34:02.992440   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.994808   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.995124   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.995154   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.995315   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.995490   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.995640   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.995818   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.995997   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.996187   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.996202   70458 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:03.283611   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:03.283643   70458 machine.go:97] duration metric: took 938.487892ms to provisionDockerMachine
	I0311 21:34:03.283655   70458 start.go:293] postStartSetup for "no-preload-324578" (driver="kvm2")
	I0311 21:34:03.283667   70458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:03.283681   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.284008   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:03.284043   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.286802   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.287220   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.287262   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.287379   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.287546   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.287731   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.287930   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.372563   70458 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:03.377151   70458 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:03.377171   70458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:03.377225   70458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:03.377291   70458 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:03.377377   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:03.387792   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:03.412721   70458 start.go:296] duration metric: took 129.055927ms for postStartSetup
	I0311 21:34:03.412770   70458 fix.go:56] duration metric: took 21.487401487s for fixHost
	I0311 21:34:03.412790   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.415209   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.415507   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.415533   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.415668   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.415866   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.416035   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.416179   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.416353   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:03.416502   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:03.416513   70458 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 21:34:03.525759   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192843.475283818
	
	I0311 21:34:03.525781   70458 fix.go:216] guest clock: 1710192843.475283818
	I0311 21:34:03.525790   70458 fix.go:229] Guest: 2024-03-11 21:34:03.475283818 +0000 UTC Remote: 2024-03-11 21:34:03.412775346 +0000 UTC m=+298.052241307 (delta=62.508472ms)
	I0311 21:34:03.525815   70458 fix.go:200] guest clock delta is within tolerance: 62.508472ms
	I0311 21:34:03.525833   70458 start.go:83] releasing machines lock for "no-preload-324578", held for 21.600490138s
	I0311 21:34:03.525866   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.526157   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:03.528771   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.529117   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.529143   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.529272   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529721   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529897   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529978   70458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:03.530022   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.530124   70458 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:03.530151   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.532450   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.532624   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.532813   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.532843   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.533001   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.533010   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.533034   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.533171   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.533197   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.533350   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.533353   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.533504   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.533506   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.533639   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.614855   70458 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:03.638835   70458 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:03.787832   70458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:03.794627   70458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:03.794677   70458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:03.811771   70458 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
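The CNI cleanup just logged renames any bridge or podman CNI config so it cannot conflict with the runtime's own networking. In plain shell it is approximately the same find invocation, with escaping added for an interactive shell (the runner passes these tokens pre-split):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;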
	I0311 21:34:03.811790   70458 start.go:494] detecting cgroup driver to use...
	I0311 21:34:03.811845   70458 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:03.829561   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:03.844536   70458 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:03.844582   70458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:03.859811   70458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:03.875041   70458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:03.991456   70458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:04.174783   70458 docker.go:233] disabling docker service ...
	I0311 21:34:04.174848   70458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:04.192524   70458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:04.206906   70458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:04.340047   70458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:04.455686   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
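The cri-docker and docker teardown above is the usual stop/disable/mask sequence run before CRI-O takes over; condensed from the commands in this log:

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet service docker   # exit status confirms docker is no longer active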
	I0311 21:34:04.472512   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:04.495487   70458 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:34:04.495550   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.506921   70458 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:04.506997   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.519408   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.531418   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.543684   70458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:04.555846   70458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:04.567610   70458 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:04.567658   70458 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:04.583015   70458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:04.594515   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:04.715185   70458 ssh_runner.go:195] Run: sudo systemctl restart crio
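Collectively, the CRI-O configuration steps above amount to roughly this sequence (condensed from the commands in this log; the sysctl probe failed only because br_netfilter was not yet loaded, hence the modprobe):

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo rm -rf /etc/cni/net.mk
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio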
	I0311 21:34:04.872750   70458 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:04.872848   70458 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:04.878207   70458 start.go:562] Will wait 60s for crictl version
	I0311 21:34:04.878250   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:04.882436   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:04.921007   70458 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:04.921079   70458 ssh_runner.go:195] Run: crio --version
	I0311 21:34:04.959326   70458 ssh_runner.go:195] Run: crio --version
	I0311 21:34:04.997595   70458 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0311 21:34:04.999092   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:05.002092   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:05.002526   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:05.002566   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:05.002790   70458 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:05.007758   70458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
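The /etc/hosts refresh just logged strips any stale host.minikube.internal entry and appends the current gateway address; an equivalent sketch using printf for the tab-separated entry:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts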
	I0311 21:34:05.023330   70458 kubeadm.go:877] updating cluster {Name:no-preload-324578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:05.023430   70458 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 21:34:05.023461   70458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:05.063043   70458 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0311 21:34:05.063071   70458 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:34:05.063161   70458 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.063170   70458 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.063183   70458 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.063190   70458 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.063233   70458 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0311 21:34:05.063171   70458 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.063272   70458 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.063307   70458 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.065013   70458 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.065019   70458 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.065020   70458 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.065045   70458 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0311 21:34:05.065017   70458 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.065018   70458 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.065064   70458 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.065365   70458 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.209182   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.211431   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.220663   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.230965   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.237859   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0311 21:34:05.260820   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.288596   70458 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0311 21:34:05.288651   70458 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.288697   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.324896   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.342987   70458 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0311 21:34:05.343030   70458 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.343080   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.371663   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.377262   70458 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0311 21:34:05.377306   70458 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.377349   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:04.792889   70604 main.go:141] libmachine: (embed-certs-743937) Waiting to get IP...
	I0311 21:34:04.793678   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:04.794097   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:04.794152   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:04.794064   71579 retry.go:31] will retry after 281.522937ms: waiting for machine to come up
	I0311 21:34:05.077518   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.077856   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.077889   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.077814   71579 retry.go:31] will retry after 303.836522ms: waiting for machine to come up
	I0311 21:34:05.383244   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.383796   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.383839   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.383758   71579 retry.go:31] will retry after 333.172379ms: waiting for machine to come up
	I0311 21:34:05.718117   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.718603   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.718630   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.718562   71579 retry.go:31] will retry after 469.046827ms: waiting for machine to come up
	I0311 21:34:06.189304   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:06.189748   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:06.189777   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:06.189705   71579 retry.go:31] will retry after 636.781259ms: waiting for machine to come up
	I0311 21:34:06.828672   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:06.829136   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:06.829174   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:06.829078   71579 retry.go:31] will retry after 758.609427ms: waiting for machine to come up
	I0311 21:34:07.589134   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:07.589490   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:07.589513   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:07.589466   71579 retry.go:31] will retry after 990.575872ms: waiting for machine to come up
	I0311 21:34:08.581971   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:08.582312   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:08.582344   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:08.582290   71579 retry.go:31] will retry after 1.142377902s: waiting for machine to come up
	I0311 21:34:05.421288   70458 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0311 21:34:05.421340   70458 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.421390   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473450   70458 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0311 21:34:05.473497   70458 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.473527   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.473545   70458 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0311 21:34:05.473584   70458 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.473603   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.473639   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473663   70458 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0311 21:34:05.473701   70458 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.473707   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.473730   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473548   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473766   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.569510   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.569615   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.578915   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0311 21:34:05.578979   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:05.579007   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:05.579029   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.579077   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:05.579117   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.579158   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.579209   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0311 21:34:05.579272   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:05.584413   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0311 21:34:05.584425   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.584458   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.679191   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0311 21:34:05.679259   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0311 21:34:05.679288   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:05.679337   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:05.679368   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0311 21:34:05.679369   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0311 21:34:05.679414   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:05.679428   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0311 21:34:05.679485   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:07.621341   70458 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.942028932s)
	I0311 21:34:07.621382   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0311 21:34:07.621385   70458 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.941873405s)
	I0311 21:34:07.621413   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0311 21:34:07.621424   70458 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (1.941989707s)
	I0311 21:34:07.621452   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0311 21:34:07.621544   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.037072472s)
	I0311 21:34:07.621558   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0311 21:34:07.621580   70458 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:07.621627   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:09.726761   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:09.727207   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:09.727241   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:09.727153   71579 retry.go:31] will retry after 1.17092616s: waiting for machine to come up
	I0311 21:34:10.899311   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:10.899656   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:10.899675   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:10.899631   71579 retry.go:31] will retry after 1.870900402s: waiting for machine to come up
	I0311 21:34:12.771931   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:12.772421   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:12.772457   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:12.772375   71579 retry.go:31] will retry after 2.721804623s: waiting for machine to come up
	I0311 21:34:11.524646   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.902991705s)
	I0311 21:34:11.524683   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0311 21:34:11.524711   70458 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:11.524787   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:13.704750   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.179921724s)
	I0311 21:34:13.704786   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0311 21:34:13.704817   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:13.704868   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:15.496186   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:15.496686   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:15.496722   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:15.496627   71579 retry.go:31] will retry after 2.568850361s: waiting for machine to come up
	I0311 21:34:18.068470   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:18.068926   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:18.068959   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:18.068872   71579 retry.go:31] will retry after 4.111366971s: waiting for machine to come up
	I0311 21:34:16.267427   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.562528088s)
	I0311 21:34:16.267458   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0311 21:34:16.267486   70458 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:16.267535   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:17.218029   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0311 21:34:17.218065   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:17.218104   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:18.987120   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.768996335s)
	I0311 21:34:18.987149   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0311 21:34:18.987167   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:18.987219   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
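Each cached image above goes through the same cycle: the stale tag is removed from the runtime, the tarball copy is skipped when it already exists on the guest, and the image is loaded into the shared store CRI-O reads from. Using etcd as the example, condensed from the commands in this log:

    sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0        # runtime copy does not match the expected hash
    stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0        # tarball already present, so the transfer is skipped
    sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0    # import into containers/storage for CRI-O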
	I0311 21:34:23.543571   70908 start.go:364] duration metric: took 4m22.394278247s to acquireMachinesLock for "old-k8s-version-239315"
	I0311 21:34:23.543649   70908 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:23.543661   70908 fix.go:54] fixHost starting: 
	I0311 21:34:23.544084   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:23.544139   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:23.561669   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0311 21:34:23.562158   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:23.562618   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:34:23.562645   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:23.562949   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:23.563114   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:23.563306   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetState
	I0311 21:34:23.565152   70908 fix.go:112] recreateIfNeeded on old-k8s-version-239315: state=Stopped err=<nil>
	I0311 21:34:23.565178   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	W0311 21:34:23.565351   70908 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:23.567943   70908 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239315" ...
	I0311 21:34:22.182707   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.183200   70604 main.go:141] libmachine: (embed-certs-743937) Found IP for machine: 192.168.50.114
	I0311 21:34:22.183228   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has current primary IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.183238   70604 main.go:141] libmachine: (embed-certs-743937) Reserving static IP address...
	I0311 21:34:22.183694   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "embed-certs-743937", mac: "52:54:00:84:b4:7a", ip: "192.168.50.114"} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.183716   70604 main.go:141] libmachine: (embed-certs-743937) DBG | skip adding static IP to network mk-embed-certs-743937 - found existing host DHCP lease matching {name: "embed-certs-743937", mac: "52:54:00:84:b4:7a", ip: "192.168.50.114"}
	I0311 21:34:22.183728   70604 main.go:141] libmachine: (embed-certs-743937) Reserved static IP address: 192.168.50.114
	I0311 21:34:22.183746   70604 main.go:141] libmachine: (embed-certs-743937) Waiting for SSH to be available...
	I0311 21:34:22.183760   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Getting to WaitForSSH function...
	I0311 21:34:22.185820   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.186157   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.186193   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.186285   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Using SSH client type: external
	I0311 21:34:22.186317   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa (-rw-------)
	I0311 21:34:22.186349   70604 main.go:141] libmachine: (embed-certs-743937) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:22.186368   70604 main.go:141] libmachine: (embed-certs-743937) DBG | About to run SSH command:
	I0311 21:34:22.186384   70604 main.go:141] libmachine: (embed-certs-743937) DBG | exit 0
	I0311 21:34:22.313253   70604 main.go:141] libmachine: (embed-certs-743937) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:22.313570   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetConfigRaw
	I0311 21:34:22.314271   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:22.317040   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.317404   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.317509   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.317641   70604 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/config.json ...
	I0311 21:34:22.317814   70604 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:22.317830   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:22.318049   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.320550   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.320833   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.320859   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.320992   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.321223   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.321405   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.321547   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.321708   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.321930   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.321944   70604 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:22.430028   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:22.430055   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.430345   70604 buildroot.go:166] provisioning hostname "embed-certs-743937"
	I0311 21:34:22.430374   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.430568   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.433555   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.433884   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.433907   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.434102   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.434311   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.434474   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.434611   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.434762   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.434936   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.434954   70604 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-743937 && echo "embed-certs-743937" | sudo tee /etc/hostname
	I0311 21:34:22.564819   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-743937
	
	I0311 21:34:22.564848   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.567667   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.568075   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.568122   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.568325   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.568519   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.568719   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.568913   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.569094   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.569335   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.569361   70604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-743937' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-743937/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-743937' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:22.684397   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:22.684425   70604 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:22.684473   70604 buildroot.go:174] setting up certificates
	I0311 21:34:22.684490   70604 provision.go:84] configureAuth start
	I0311 21:34:22.684507   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.684840   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:22.687805   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.688156   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.688178   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.688401   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.690975   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.691302   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.691321   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.691469   70604 provision.go:143] copyHostCerts
	I0311 21:34:22.691528   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:22.691540   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:22.691598   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:22.691690   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:22.691706   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:22.691729   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:22.691834   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:22.691850   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:22.691878   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:22.691946   70604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.embed-certs-743937 san=[127.0.0.1 192.168.50.114 embed-certs-743937 localhost minikube]
	I0311 21:34:22.838395   70604 provision.go:177] copyRemoteCerts
	I0311 21:34:22.838452   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:22.838478   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.840975   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.841308   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.841342   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.841487   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.841684   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.841834   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.841968   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:22.924202   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:22.956079   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0311 21:34:22.982352   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 21:34:23.008286   70604 provision.go:87] duration metric: took 323.780619ms to configureAuth
	I0311 21:34:23.008311   70604 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:23.008481   70604 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:34:23.008553   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.011128   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.011439   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.011461   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.011632   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.011780   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.011919   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.012094   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.012278   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:23.012436   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:23.012452   70604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:23.288122   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:23.288146   70604 machine.go:97] duration metric: took 970.321311ms to provisionDockerMachine
	I0311 21:34:23.288157   70604 start.go:293] postStartSetup for "embed-certs-743937" (driver="kvm2")
	I0311 21:34:23.288167   70604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:23.288180   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.288496   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:23.288532   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.291434   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.291823   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.291856   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.292079   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.292297   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.292468   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.292629   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.376367   70604 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:23.381629   70604 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:23.381660   70604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:23.381754   70604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:23.381855   70604 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:23.381967   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:23.392280   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:23.423241   70604 start.go:296] duration metric: took 135.071082ms for postStartSetup
	I0311 21:34:23.423283   70604 fix.go:56] duration metric: took 19.897275281s for fixHost
	I0311 21:34:23.423310   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.426264   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.426623   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.426652   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.426862   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.427052   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.427256   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.427419   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.427575   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:23.427809   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:23.427822   70604 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:23.543425   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192863.499269756
	
	I0311 21:34:23.543447   70604 fix.go:216] guest clock: 1710192863.499269756
	I0311 21:34:23.543454   70604 fix.go:229] Guest: 2024-03-11 21:34:23.499269756 +0000 UTC Remote: 2024-03-11 21:34:23.423289031 +0000 UTC m=+304.494814333 (delta=75.980725ms)
	I0311 21:34:23.543472   70604 fix.go:200] guest clock delta is within tolerance: 75.980725ms
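(Note: the fix.go lines above compare the guest's wall clock, read over SSH, against the host's local time and accept the drift if it stays under a tolerance. A tiny sketch of that comparison follows, using the two timestamps from the log; the one-second tolerance is an assumption, since the actual threshold is not shown here.)

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the log above: the guest clock reported over SSH and the host's local time.
	guest := time.Unix(0, int64(1710192863.499269756*float64(time.Second)))
	host := time.Date(2024, 3, 11, 21, 34, 23, 423289031, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for illustration
	fmt.Printf("clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}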
	I0311 21:34:23.543478   70604 start.go:83] releasing machines lock for "embed-certs-743937", held for 20.0175167s
	I0311 21:34:23.543504   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.543746   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:23.546763   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.547188   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.547223   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.547396   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.547882   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.548077   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.548163   70604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:23.548226   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.548282   70604 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:23.548309   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.551186   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551485   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551609   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.551642   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551795   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.551979   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.552001   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.552035   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.552146   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.552211   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.552277   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.552368   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.552501   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.552666   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.660064   70604 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:23.668731   70604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:23.831784   70604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:23.840331   70604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:23.840396   70604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:23.864730   70604 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:23.864766   70604 start.go:494] detecting cgroup driver to use...
	I0311 21:34:23.864831   70604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:23.886072   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:23.901660   70604 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:23.901727   70604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:23.917374   70604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:23.932525   70604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:24.066368   70604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:24.222425   70604 docker.go:233] disabling docker service ...
	I0311 21:34:24.222487   70604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:24.240937   70604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:24.257050   70604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:24.395003   70604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:24.550709   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:24.572524   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:24.599710   70604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:34:24.599776   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.612426   70604 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:24.612514   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.626989   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.639576   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
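(Note: the sed invocations above rewrite the pause_image and cgroup_manager keys of /etc/crio/crio.conf.d/02-crio.conf in place. A rough local equivalent of that line-oriented rewrite in Go is sketched below; the relative file path is a stand-in, and this runs locally rather than through ssh_runner.)

package main

import (
	"log"
	"os"
	"regexp"
)

// setKey replaces every line matching `key = ...` with `key = "value"`,
// mirroring the sed expressions in the log above.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	path := "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
	conf, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		log.Fatal(err)
	}
}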
	I0311 21:34:24.653711   70604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:24.673581   70604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:24.684772   70604 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:24.684841   70604 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:24.707855   70604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
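(Note: when the sysctl probe above fails because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist yet, the br_netfilter module is loaded and IPv4 forwarding is switched on. A small sketch of the same checks with plain file operations follows; shelling out to modprobe is an assumption about the simplest equivalent, and root privileges are required.)

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	// If the bridge sysctl is missing, the br_netfilter module has not been loaded yet.
	if _, err := os.Stat(bridgeSysctl); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}

	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		log.Fatal(err)
	}
}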
	I0311 21:34:24.719801   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:24.904788   70604 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:25.063437   70604 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:25.063511   70604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:25.070294   70604 start.go:562] Will wait 60s for crictl version
	I0311 21:34:25.070352   70604 ssh_runner.go:195] Run: which crictl
	I0311 21:34:25.074945   70604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:25.121979   70604 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:25.122070   70604 ssh_runner.go:195] Run: crio --version
	I0311 21:34:25.159092   70604 ssh_runner.go:195] Run: crio --version
	I0311 21:34:25.207391   70604 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 21:34:21.469205   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.481954559s)
	I0311 21:34:21.469242   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0311 21:34:21.469285   70458 cache_images.go:123] Successfully loaded all cached images
	I0311 21:34:21.469295   70458 cache_images.go:92] duration metric: took 16.40620232s to LoadCachedImages
	I0311 21:34:21.469306   70458 kubeadm.go:928] updating node { 192.168.39.36 8443 v1.29.0-rc.2 crio true true} ...
	I0311 21:34:21.469436   70458 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-324578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:21.469513   70458 ssh_runner.go:195] Run: crio config
	I0311 21:34:21.531635   70458 cni.go:84] Creating CNI manager for ""
	I0311 21:34:21.531659   70458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:21.531671   70458 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:21.531690   70458 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-324578 NodeName:no-preload-324578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:34:21.531820   70458 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-324578"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:21.531876   70458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0311 21:34:21.546000   70458 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:21.546060   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:21.558818   70458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0311 21:34:21.577685   70458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0311 21:34:21.595960   70458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
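(Note: the kubeadm config dumped above is rendered from the kubeadm options listed earlier in the log and then copied to /var/tmp/minikube/kubeadm.yaml.new. A stripped-down sketch of that templating step with text/template follows; the params struct, its field names, and the truncated template are invented for illustration and are not minikube's kubeadm.go types.)

package main

import (
	"log"
	"os"
	"text/template"
)

// params is a hypothetical subset of the values that appear in the rendered config above.
type params struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	p := params{
		AdvertiseAddress:  "192.168.39.36",
		BindPort:          8443,
		NodeName:          "no-preload-324578",
		KubernetesVersion: "v1.29.0-rc.2",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}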
	I0311 21:34:21.615003   70458 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:21.619290   70458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
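(Note: the two commands above ensure /etc/hosts maps control-plane.minikube.internal to the node IP exactly once: any existing line for that name is filtered out and a fresh entry is appended. The same idea in Go is sketched below, operating on a hypothetical local copy of the file rather than /etc/hosts itself.)

package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry drops any line that already ends with "\t<name>" and appends "ip\tname",
// mirroring the grep -v / echo pipeline in the log above.
func ensureHostsEntry(path, ip, name string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("hosts", "192.168.39.36", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}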
	I0311 21:34:21.633307   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:21.751586   70458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:21.771672   70458 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578 for IP: 192.168.39.36
	I0311 21:34:21.771698   70458 certs.go:194] generating shared ca certs ...
	I0311 21:34:21.771717   70458 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:21.771907   70458 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:21.771975   70458 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:21.771987   70458 certs.go:256] generating profile certs ...
	I0311 21:34:21.772093   70458 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/client.key
	I0311 21:34:21.772190   70458 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.key.681a9200
	I0311 21:34:21.772244   70458 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.key
	I0311 21:34:21.772371   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:21.772421   70458 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:21.772435   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:21.772475   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:21.772509   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:21.772542   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:21.772606   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:21.773241   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:21.833566   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:21.868156   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:21.910118   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:21.952222   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0311 21:34:21.988148   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 21:34:22.018493   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:22.045225   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:22.071481   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:22.097525   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:22.123425   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:22.156613   70458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:22.174679   70458 ssh_runner.go:195] Run: openssl version
	I0311 21:34:22.181137   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:22.197490   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.203508   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.203556   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.210822   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:22.224269   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:22.237282   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.242762   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.242816   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.249334   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:22.261866   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:22.273674   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.279004   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.279055   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.285394   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
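(Note: the block above links each CA in /usr/share/ca-certificates into /etc/ssl/certs under the <subject-hash>.0 name that OpenSSL's lookup expects. A sketch of that hash-and-link step follows, shelling out to openssl for the subject hash; the paths in main are placeholders and the target directory is assumed writable.)

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkForOpenSSL computes the OpenSSL subject hash of certPath and symlinks
// <linkDir>/<hash>.0 to it, like the `openssl x509 -hash -noout` + `ln -fs` pair in the log.
func linkForOpenSSL(certPath, linkDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(linkDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, mirroring ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	// Placeholder paths for illustration.
	if err := linkForOpenSSL("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}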
	I0311 21:34:22.299493   70458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:22.304827   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:22.311349   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:22.318377   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:22.325621   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:22.332316   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:22.338893   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
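(Note: the `openssl x509 -checkend 86400` runs above verify that each control-plane certificate is still valid 24 hours from now. The same check can be done directly with crypto/x509; a brief sketch follows, using an assumed certificate path in place of the /var/lib/minikube/certs files.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before now+window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}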
	I0311 21:34:22.345167   70458 kubeadm.go:391] StartCluster: {Name:no-preload-324578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:22.345246   70458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:22.345286   70458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:22.386703   70458 cri.go:89] found id: ""
	I0311 21:34:22.386785   70458 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:22.398475   70458 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:22.398494   70458 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:22.398500   70458 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:22.398558   70458 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:22.409434   70458 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:22.410675   70458 kubeconfig.go:125] found "no-preload-324578" server: "https://192.168.39.36:8443"
	I0311 21:34:22.412906   70458 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:22.423677   70458 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.36
	I0311 21:34:22.423708   70458 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:22.423719   70458 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:22.423762   70458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:22.472548   70458 cri.go:89] found id: ""
	I0311 21:34:22.472615   70458 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:22.494701   70458 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:22.506944   70458 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:22.506964   70458 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:22.507015   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:22.517468   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:22.517521   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:22.528281   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:22.538496   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:22.538533   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:22.553009   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:22.566120   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:22.566189   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:22.579239   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:22.590180   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:22.590227   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:22.602988   70458 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:22.615631   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:22.730568   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.355205   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.588923   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.694870   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.796820   70458 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:23.796918   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.297341   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.797197   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.840030   70458 api_server.go:72] duration metric: took 1.043209284s to wait for apiserver process to appear ...
	I0311 21:34:24.840062   70458 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:34:24.840101   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:24.840560   70458 api_server.go:269] stopped: https://192.168.39.36:8443/healthz: Get "https://192.168.39.36:8443/healthz": dial tcp 192.168.39.36:8443: connect: connection refused
	I0311 21:34:25.341161   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
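(Note: api_server.go above polls https://192.168.39.36:8443/healthz roughly every 500ms until the endpoint answers; the 403 and 500 bodies that appear later in the log are normal intermediate states while RBAC bootstrap finishes. A compact sketch of such a poll loop follows; the overall four-minute deadline is an assumption, and certificate verification is skipped purely because the request is anonymous and illustrative.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.36:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			log.Printf("healthz not reachable yet: %v", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			log.Printf("healthz returned %d: %s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for apiserver healthz")
}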
	I0311 21:34:23.569356   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .Start
	I0311 21:34:23.569527   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring networks are active...
	I0311 21:34:23.570188   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network default is active
	I0311 21:34:23.570613   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network mk-old-k8s-version-239315 is active
	I0311 21:34:23.571070   70908 main.go:141] libmachine: (old-k8s-version-239315) Getting domain xml...
	I0311 21:34:23.571836   70908 main.go:141] libmachine: (old-k8s-version-239315) Creating domain...
	I0311 21:34:24.895619   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting to get IP...
	I0311 21:34:24.896680   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:24.897160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:24.897218   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:24.897131   71714 retry.go:31] will retry after 268.563191ms: waiting for machine to come up
	I0311 21:34:25.167783   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.168312   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.168343   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.168268   71714 retry.go:31] will retry after 245.059124ms: waiting for machine to come up
	I0311 21:34:25.414644   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.415139   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.415168   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.415100   71714 retry.go:31] will retry after 407.807793ms: waiting for machine to come up
	I0311 21:34:25.824887   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.825351   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.825379   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.825274   71714 retry.go:31] will retry after 503.187834ms: waiting for machine to come up
	I0311 21:34:25.208819   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:25.211726   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:25.212203   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:25.212244   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:25.212486   70604 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:25.217365   70604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:25.233670   70604 kubeadm.go:877] updating cluster {Name:embed-certs-743937 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:25.233825   70604 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:34:25.233886   70604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:25.282028   70604 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 21:34:25.282108   70604 ssh_runner.go:195] Run: which lz4
	I0311 21:34:25.287047   70604 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0311 21:34:25.291721   70604 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:34:25.291751   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 21:34:27.414481   70604 crio.go:444] duration metric: took 2.127464595s to copy over tarball
	I0311 21:34:27.414554   70604 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
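(Note: since no preloaded images were found by crictl, the preload tarball is copied over and unpacked into /var with extended attributes preserved, as the tar command above shows. A small sketch of that extraction step via os/exec follows; the tarball path is a placeholder and root privileges are assumed for writing under /var.)

package main

import (
	"log"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image tarball into destDir,
// mirroring the tar invocation in the log above.
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("preloaded.tar.lz4", "/var"); err != nil { // placeholder tarball path
		log.Fatal(err)
	}
}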
	I0311 21:34:28.225996   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:28.226031   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:28.226048   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.285274   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:28.285307   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:28.340493   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.512353   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:28.512409   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:28.840800   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.852523   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:28.852560   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:29.341135   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:29.354997   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:29.355028   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:29.840769   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:29.848023   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0311 21:34:29.856262   70458 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:34:29.856290   70458 api_server.go:131] duration metric: took 5.016219789s to wait for apiserver health ...
	I0311 21:34:29.856300   70458 cni.go:84] Creating CNI manager for ""
	I0311 21:34:29.856308   70458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:29.858297   70458 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:34:29.859734   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:34:29.891375   70458 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:34:29.932393   70458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:34:29.959208   70458 system_pods.go:59] 8 kube-system pods found
	I0311 21:34:29.959257   70458 system_pods.go:61] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:34:29.959268   70458 system_pods.go:61] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:34:29.959278   70458 system_pods.go:61] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:34:29.959290   70458 system_pods.go:61] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:34:29.959296   70458 system_pods.go:61] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:34:29.959303   70458 system_pods.go:61] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:34:29.959319   70458 system_pods.go:61] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:34:29.959335   70458 system_pods.go:61] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:34:29.959343   70458 system_pods.go:74] duration metric: took 26.926978ms to wait for pod list to return data ...
	I0311 21:34:29.959355   70458 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:34:29.963151   70458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:34:29.963179   70458 node_conditions.go:123] node cpu capacity is 2
	I0311 21:34:29.963193   70458 node_conditions.go:105] duration metric: took 3.825246ms to run NodePressure ...
	I0311 21:34:29.963209   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:26.330005   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:26.330547   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:26.330569   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:26.330464   71714 retry.go:31] will retry after 723.914956ms: waiting for machine to come up
	I0311 21:34:27.056271   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.056879   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.056901   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.056834   71714 retry.go:31] will retry after 693.583075ms: waiting for machine to come up
	I0311 21:34:27.752514   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.752958   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.752980   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.752916   71714 retry.go:31] will retry after 902.247864ms: waiting for machine to come up
	I0311 21:34:28.657551   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:28.658023   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:28.658079   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:28.658008   71714 retry.go:31] will retry after 1.140425887s: waiting for machine to come up
	I0311 21:34:29.800305   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:29.800824   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:29.800852   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:29.800774   71714 retry.go:31] will retry after 1.68593342s: waiting for machine to come up
	I0311 21:34:32.367999   70458 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.404768175s)
	I0311 21:34:32.368034   70458 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:34:32.375444   70458 kubeadm.go:733] kubelet initialised
	I0311 21:34:32.375468   70458 kubeadm.go:734] duration metric: took 7.423643ms waiting for restarted kubelet to initialise ...
	I0311 21:34:32.375477   70458 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:32.383579   70458 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.389728   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "coredns-76f75df574-s6lsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.389755   70458 pod_ready.go:81] duration metric: took 6.144226ms for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.389766   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "coredns-76f75df574-s6lsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.389775   70458 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.398797   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "etcd-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.398822   70458 pod_ready.go:81] duration metric: took 9.033188ms for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.398833   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "etcd-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.398841   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.407870   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-apiserver-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.407905   70458 pod_ready.go:81] duration metric: took 9.056349ms for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.407915   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-apiserver-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.407928   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.414434   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.414455   70458 pod_ready.go:81] duration metric: took 6.519611ms for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.414463   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.414468   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.771994   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-proxy-rmz4b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.772025   70458 pod_ready.go:81] duration metric: took 357.549783ms for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.772034   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-proxy-rmz4b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.772041   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:33.175562   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-scheduler-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.175595   70458 pod_ready.go:81] duration metric: took 403.546508ms for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:33.175608   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-scheduler-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.175617   70458 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:33.573749   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.573777   70458 pod_ready.go:81] duration metric: took 398.141162ms for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:33.573789   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.573799   70458 pod_ready.go:38] duration metric: took 1.198311127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:33.573862   70458 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:34:33.592112   70458 ops.go:34] apiserver oom_adj: -16
	I0311 21:34:33.592148   70458 kubeadm.go:591] duration metric: took 11.193640837s to restartPrimaryControlPlane
	I0311 21:34:33.592161   70458 kubeadm.go:393] duration metric: took 11.247001751s to StartCluster
	I0311 21:34:33.592181   70458 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:33.592269   70458 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:34:33.594144   70458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:33.594461   70458 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:34:33.596303   70458 out.go:177] * Verifying Kubernetes components...
	I0311 21:34:33.594553   70458 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:34:33.594702   70458 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:34:33.597724   70458 addons.go:69] Setting default-storageclass=true in profile "no-preload-324578"
	I0311 21:34:33.597727   70458 addons.go:69] Setting storage-provisioner=true in profile "no-preload-324578"
	I0311 21:34:33.597739   70458 addons.go:69] Setting metrics-server=true in profile "no-preload-324578"
	I0311 21:34:33.597759   70458 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-324578"
	I0311 21:34:33.597771   70458 addons.go:234] Setting addon storage-provisioner=true in "no-preload-324578"
	I0311 21:34:33.597772   70458 addons.go:234] Setting addon metrics-server=true in "no-preload-324578"
	W0311 21:34:33.597780   70458 addons.go:243] addon storage-provisioner should already be in state true
	W0311 21:34:33.597795   70458 addons.go:243] addon metrics-server should already be in state true
	I0311 21:34:33.597828   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.597838   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.597733   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:33.598079   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598110   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.598224   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598260   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598305   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.598269   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.613473   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0311 21:34:33.613994   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.614558   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.614576   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.614946   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.615385   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.615415   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.618026   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0311 21:34:33.618201   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0311 21:34:33.618370   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.618497   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.618818   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.618833   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.618978   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.618989   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.619157   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.619343   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.619389   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.619926   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.619956   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.623211   70458 addons.go:234] Setting addon default-storageclass=true in "no-preload-324578"
	W0311 21:34:33.623232   70458 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:34:33.623260   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.623634   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.623660   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.635263   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35961
	I0311 21:34:33.635575   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.636071   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.636080   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.636462   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.636606   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.638520   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.640583   70458 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:34:33.642029   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:34:33.642045   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:34:33.642058   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.640562   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0311 21:34:33.641020   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0311 21:34:33.642572   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.643082   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.643107   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.643432   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.644002   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.644030   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.644213   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.644711   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.644733   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.645120   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.645334   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.645406   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.645861   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.645888   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.646042   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.646332   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.646548   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.646719   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.646986   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.648681   70458 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:30.659466   70604 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.244884989s)
	I0311 21:34:30.659492   70604 crio.go:451] duration metric: took 3.244983149s to extract the tarball
	I0311 21:34:30.659500   70604 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:34:30.708661   70604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:30.769502   70604 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:34:30.769530   70604 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:34:30.769540   70604 kubeadm.go:928] updating node { 192.168.50.114 8443 v1.28.4 crio true true} ...
	I0311 21:34:30.769675   70604 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-743937 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:30.769757   70604 ssh_runner.go:195] Run: crio config
	I0311 21:34:30.820223   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:34:30.820251   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:30.820267   70604 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:30.820296   70604 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-743937 NodeName:embed-certs-743937 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:34:30.820475   70604 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-743937"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:30.820563   70604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 21:34:30.833086   70604 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:30.833175   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:30.844335   70604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0311 21:34:30.863586   70604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:34:30.883598   70604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0311 21:34:30.904711   70604 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:30.909433   70604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:30.924054   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:31.064573   70604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:31.096931   70604 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937 for IP: 192.168.50.114
	I0311 21:34:31.096960   70604 certs.go:194] generating shared ca certs ...
	I0311 21:34:31.096980   70604 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:31.097157   70604 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:31.097220   70604 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:31.097236   70604 certs.go:256] generating profile certs ...
	I0311 21:34:31.097368   70604 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/client.key
	I0311 21:34:31.097453   70604 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.key.c230aed9
	I0311 21:34:31.097520   70604 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.key
	I0311 21:34:31.097660   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:31.097709   70604 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:31.097770   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:31.097826   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:31.097867   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:31.097899   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:31.097958   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:31.098771   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:31.135109   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:31.173483   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:31.215059   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:31.253244   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0311 21:34:31.305450   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 21:34:31.340238   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:31.366993   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:31.393936   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:31.420998   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:31.446500   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:31.474047   70604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:31.493935   70604 ssh_runner.go:195] Run: openssl version
	I0311 21:34:31.500607   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:31.513874   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.519255   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.519303   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.525967   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:31.538995   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:31.551625   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.557235   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.557292   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.563658   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:31.576689   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:31.589299   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.594405   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.594453   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.601041   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:31.619307   70604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:31.624565   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:31.632121   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:31.638843   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:31.646400   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:31.652701   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:31.659661   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 21:34:31.666390   70604 kubeadm.go:391] StartCluster: {Name:embed-certs-743937 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:31.666496   70604 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:31.666546   70604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:31.716714   70604 cri.go:89] found id: ""
	I0311 21:34:31.716796   70604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:31.733945   70604 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:31.733967   70604 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:31.733974   70604 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:31.734019   70604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:31.746543   70604 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:31.747720   70604 kubeconfig.go:125] found "embed-certs-743937" server: "https://192.168.50.114:8443"
	I0311 21:34:31.749670   70604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:31.762374   70604 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.114
	I0311 21:34:31.762401   70604 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:31.762410   70604 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:31.762462   70604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:31.811965   70604 cri.go:89] found id: ""
	I0311 21:34:31.812055   70604 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:31.836539   70604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:31.849272   70604 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:31.849295   70604 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:31.849348   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:31.861345   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:31.861423   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:31.875436   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:31.887183   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:31.887251   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:31.900032   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:31.911614   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:31.911690   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:31.924791   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:31.937131   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:31.937204   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:31.949123   70604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:31.960234   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:32.089622   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:32.806370   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.033263   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.135981   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.248827   70604 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:33.248917   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:33.749207   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:33.650190   70458 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:34:33.650207   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:34:33.650223   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.653451   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.653895   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.653920   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.654131   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.654302   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.654472   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.654631   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.689121   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0311 21:34:33.689487   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.693084   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.693105   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.693596   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.693796   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.696074   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.696629   70458 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:34:33.696644   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:34:33.696662   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.699920   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.700323   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.700342   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.700564   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.700756   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.700859   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.700932   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.896331   70458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:33.969322   70458 node_ready.go:35] waiting up to 6m0s for node "no-preload-324578" to be "Ready" ...
	I0311 21:34:34.037114   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:34:34.059051   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:34:34.059080   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:34:34.094822   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:34:34.142231   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:34:34.142259   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:34:34.218979   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:34:34.219002   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:34:34.260381   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:34:35.648210   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.61103949s)
	I0311 21:34:35.648241   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.553388189s)
	I0311 21:34:35.648344   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648381   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648367   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648409   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648658   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.648675   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.648685   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648694   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648754   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.648997   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.649019   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.650050   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.650068   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.650091   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.650101   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.650367   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.650384   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.658738   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.658764   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.658991   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.659007   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.687393   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.426969773s)
	I0311 21:34:35.687453   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.687467   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.687771   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.687810   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.687828   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.687848   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.687831   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.688142   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.688164   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.688178   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.688214   70458 addons.go:470] Verifying addon metrics-server=true in "no-preload-324578"
	I0311 21:34:35.690413   70458 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
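The addon-enable sequence above scp's each manifest into /etc/kubernetes/addons on the guest and applies it with the kubectl binary bundled under /var/lib/minikube/binaries, using the on-node kubeconfig. A minimal sketch of that apply step, assuming it runs on the guest itself; the helper name and paths are illustrative and mirror the log, not minikube's internal API:

package main

// Sketch of the addon apply step seen in the log above: run the bundled
// kubectl against the on-node kubeconfig for each addon manifest.
// Assumes execution on the guest; paths are taken from the log for illustration.
import (
	"fmt"
	"os/exec"
)

func applyAddonManifests(kubectlPath string, manifests []string) error {
	// sudo accepts VAR=value assignments before the command, matching the log line.
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectlPath, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	if err := applyAddonManifests("/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl", manifests); err != nil {
		fmt.Println(err)
	}
}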
	I0311 21:34:31.488010   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:31.488449   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:31.488471   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:31.488421   71714 retry.go:31] will retry after 2.325869089s: waiting for machine to come up
	I0311 21:34:33.815568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:33.816215   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:33.816236   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:33.816176   71714 retry.go:31] will retry after 2.457084002s: waiting for machine to come up
	I0311 21:34:34.249462   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:34.749177   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:34.778830   70604 api_server.go:72] duration metric: took 1.530004395s to wait for apiserver process to appear ...
	I0311 21:34:34.778858   70604 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:34:34.778879   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:34.779469   70604 api_server.go:269] stopped: https://192.168.50.114:8443/healthz: Get "https://192.168.50.114:8443/healthz": dial tcp 192.168.50.114:8443: connect: connection refused
	I0311 21:34:35.279027   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.110193   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:38.110221   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:38.110234   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.159861   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:38.159909   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:38.279045   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.289460   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:38.289491   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:38.779423   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.785174   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:38.785206   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:39.278910   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:39.290017   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:39.290054   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:39.779616   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:39.786362   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0311 21:34:39.794557   70604 api_server.go:141] control plane version: v1.28.4
	I0311 21:34:39.794583   70604 api_server.go:131] duration metric: took 5.01571788s to wait for apiserver health ...
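The healthz loop above polls https://192.168.50.114:8443/healthz roughly every 500ms, tolerating the early 403 (anonymous user) and 500 (post-start hooks still failing) responses until a plain 200 "ok" comes back. A minimal sketch of such a poll, assuming the apiserver's serving certificate is not locally trusted (hence the skipped TLS verification); this is illustrative, not minikube's api_server.go implementation:

package main

// Sketch of an apiserver healthz poll like the one in the log: keep probing
// until /healthz returns 200, treating 403/500 responses as "not ready yet".
import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving cert is usually not in the local trust store,
		// so skip verification for this liveness-style probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is typically just "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.114:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}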
	I0311 21:34:39.794594   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:34:39.794601   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:39.796063   70604 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:34:35.691844   70458 addons.go:505] duration metric: took 2.097304232s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0311 21:34:35.974533   70458 node_ready.go:53] node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:37.983073   70458 node_ready.go:53] node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:38.977713   70458 node_ready.go:49] node "no-preload-324578" has status "Ready":"True"
	I0311 21:34:38.977738   70458 node_ready.go:38] duration metric: took 5.008382488s for node "no-preload-324578" to be "Ready" ...
	I0311 21:34:38.977749   70458 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:38.986414   70458 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:38.993430   70458 pod_ready.go:92] pod "coredns-76f75df574-s6lsb" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:38.993454   70458 pod_ready.go:81] duration metric: took 7.012539ms for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:38.993465   70458 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:36.274640   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:36.275119   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:36.275157   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:36.275064   71714 retry.go:31] will retry after 3.618026102s: waiting for machine to come up
	I0311 21:34:39.894877   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:39.895397   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:39.895447   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:39.895343   71714 retry.go:31] will retry after 3.826847061s: waiting for machine to come up
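The retry.go lines above wait for the restarted VM's DHCP lease to report an IP, sleeping for increasing, slightly jittered intervals between checks. A small retry-with-backoff sketch in the same spirit; the probe, attempt count, and delays are assumptions for illustration, not minikube's retry package:

package main

// Sketch of a retry-with-backoff wait like the "waiting for machine to come up"
// lines above: keep probing until the condition holds or attempts run out.
import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(probe func() error, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = probe(); err == nil {
			return nil
		}
		// Grow the delay each round and add jitter, roughly matching the
		// increasing "will retry after ..." intervals in the log.
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}

func main() {
	probes := 0
	err := retryWithBackoff(func() error {
		probes++
		if probes < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	}, 10, time.Second)
	fmt.Println("result:", err)
}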
	I0311 21:34:39.797420   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:34:39.810877   70604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:34:39.836773   70604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:34:39.852496   70604 system_pods.go:59] 8 kube-system pods found
	I0311 21:34:39.852541   70604 system_pods.go:61] "coredns-5dd5756b68-czng9" [a57d0643-36c5-44e2-a113-de051d0e0408] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:34:39.852556   70604 system_pods.go:61] "etcd-embed-certs-743937" [9f0051e8-247f-4968-a834-c38c5f0c4407] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:34:39.852567   70604 system_pods.go:61] "kube-apiserver-embed-certs-743937" [4ac979a6-1906-4a58-9d41-9587d66d81ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:34:39.852578   70604 system_pods.go:61] "kube-controller-manager-embed-certs-743937" [263ba100-e911-4857-a973-c4dc9312a653] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:34:39.852591   70604 system_pods.go:61] "kube-proxy-n2qzt" [21f56cfb-a3f5-4c4b-993d-53b6d8f60ec2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:34:39.852600   70604 system_pods.go:61] "kube-scheduler-embed-certs-743937" [0121fa4d-91a8-432b-9f21-c6e8c0b33872] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:34:39.852606   70604 system_pods.go:61] "metrics-server-57f55c9bc5-7qw98" [3d3f2e87-2e36-4ca3-b31c-fc5f38251f03] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:34:39.852617   70604 system_pods.go:61] "storage-provisioner" [72fd13c7-1a79-4e8a-bdc2-f45117599d85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:34:39.852624   70604 system_pods.go:74] duration metric: took 15.823708ms to wait for pod list to return data ...
	I0311 21:34:39.852634   70604 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:34:39.856288   70604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:34:39.856309   70604 node_conditions.go:123] node cpu capacity is 2
	I0311 21:34:39.856317   70604 node_conditions.go:105] duration metric: took 3.676347ms to run NodePressure ...
	I0311 21:34:39.856331   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:40.103882   70604 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:34:40.108726   70604 kubeadm.go:733] kubelet initialised
	I0311 21:34:40.108758   70604 kubeadm.go:734] duration metric: took 4.847245ms waiting for restarted kubelet to initialise ...
	I0311 21:34:40.108768   70604 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:40.115566   70604 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-czng9" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:42.124435   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:45.026187   70417 start.go:364] duration metric: took 58.09976601s to acquireMachinesLock for "default-k8s-diff-port-766430"
	I0311 21:34:45.026231   70417 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:45.026242   70417 fix.go:54] fixHost starting: 
	I0311 21:34:45.026632   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:45.026661   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:45.046341   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0311 21:34:45.046779   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:45.047336   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:34:45.047375   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:45.047741   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:45.047920   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:34:45.048090   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:34:45.049581   70417 fix.go:112] recreateIfNeeded on default-k8s-diff-port-766430: state=Stopped err=<nil>
	I0311 21:34:45.049605   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	W0311 21:34:45.049759   70417 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:45.051505   70417 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-766430" ...
	I0311 21:34:41.001474   70458 pod_ready.go:102] pod "etcd-no-preload-324578" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:43.500991   70458 pod_ready.go:92] pod "etcd-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.501018   70458 pod_ready.go:81] duration metric: took 4.507545237s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.501030   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.506732   70458 pod_ready.go:92] pod "kube-apiserver-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.506753   70458 pod_ready.go:81] duration metric: took 5.714866ms for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.506764   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.511432   70458 pod_ready.go:92] pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.511456   70458 pod_ready.go:81] duration metric: took 4.684021ms for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.511469   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.516333   70458 pod_ready.go:92] pod "kube-proxy-rmz4b" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.516360   70458 pod_ready.go:81] duration metric: took 4.882955ms for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.516370   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.521501   70458 pod_ready.go:92] pod "kube-scheduler-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.521524   70458 pod_ready.go:81] duration metric: took 5.146945ms for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.521532   70458 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
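The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True. A minimal client-go sketch of the same kind of check; the kubeconfig path, namespace, pod name, and poll interval are placeholders, and this is not minikube's pod_ready implementation:

package main

// Sketch of waiting for a pod's Ready condition with client-go, in the spirit
// of the pod_ready.go lines above. Paths and names are placeholders.
import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitForPodReady(ctx, cs, "kube-system", "etcd-no-preload-324578"))
}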
	I0311 21:34:43.723851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724335   70908 main.go:141] libmachine: (old-k8s-version-239315) Found IP for machine: 192.168.72.52
	I0311 21:34:43.724367   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserving static IP address...
	I0311 21:34:43.724382   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has current primary IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724722   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.724759   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserved static IP address: 192.168.72.52
	I0311 21:34:43.724774   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | skip adding static IP to network mk-old-k8s-version-239315 - found existing host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"}
	I0311 21:34:43.724797   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Getting to WaitForSSH function...
	I0311 21:34:43.724815   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting for SSH to be available...
	I0311 21:34:43.727015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727330   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.727354   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727541   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH client type: external
	I0311 21:34:43.727568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa (-rw-------)
	I0311 21:34:43.727624   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:43.727641   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | About to run SSH command:
	I0311 21:34:43.727651   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | exit 0
	I0311 21:34:43.848884   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:43.849287   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetConfigRaw
	I0311 21:34:43.850084   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:43.852942   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853529   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.853572   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853801   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:34:43.854001   70908 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:43.854024   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:43.854255   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.856623   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857153   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.857187   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857321   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.857516   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857702   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857897   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.858105   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.858332   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.858349   70908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:43.961617   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:43.961664   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.961921   70908 buildroot.go:166] provisioning hostname "old-k8s-version-239315"
	I0311 21:34:43.961945   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.962134   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.964672   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.964987   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.965015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.965122   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.965305   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965466   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965591   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.965801   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.966042   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.966055   70908 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239315 && echo "old-k8s-version-239315" | sudo tee /etc/hostname
	I0311 21:34:44.088097   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239315
	
	I0311 21:34:44.088126   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.090911   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091167   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.091205   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091347   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.091524   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091680   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091818   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.091984   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.092185   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.092205   70908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239315' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239315/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239315' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:44.207643   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:44.207674   70908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:44.207693   70908 buildroot.go:174] setting up certificates
	I0311 21:34:44.207701   70908 provision.go:84] configureAuth start
	I0311 21:34:44.207710   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:44.207975   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:44.211160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211556   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.211588   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211754   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.214211   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214553   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.214585   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214732   70908 provision.go:143] copyHostCerts
	I0311 21:34:44.214797   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:44.214813   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:44.214886   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:44.214991   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:44.215005   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:44.215035   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:44.215160   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:44.215171   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:44.215198   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:44.215267   70908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239315 san=[127.0.0.1 192.168.72.52 localhost minikube old-k8s-version-239315]
	I0311 21:34:44.305250   70908 provision.go:177] copyRemoteCerts
	I0311 21:34:44.305329   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:44.305367   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.308244   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308636   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.308673   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308874   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.309092   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.309290   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.309446   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.394958   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:44.423314   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0311 21:34:44.459338   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:34:44.491201   70908 provision.go:87] duration metric: took 283.487383ms to configureAuth
	I0311 21:34:44.491232   70908 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:44.491419   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:34:44.491484   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.494039   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494476   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.494509   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494638   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.494830   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.494998   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.495175   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.495366   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.495548   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.495570   70908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:44.787935   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:44.787961   70908 machine.go:97] duration metric: took 933.945971ms to provisionDockerMachine
	I0311 21:34:44.787971   70908 start.go:293] postStartSetup for "old-k8s-version-239315" (driver="kvm2")
	I0311 21:34:44.787983   70908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:44.788007   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:44.788327   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:44.788355   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.791133   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791460   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.791492   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791637   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.791858   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.792021   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.792165   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.877163   70908 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:44.882141   70908 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:44.882164   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:44.882241   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:44.882330   70908 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:44.882442   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:44.894699   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:44.919809   70908 start.go:296] duration metric: took 131.8264ms for postStartSetup
	I0311 21:34:44.919848   70908 fix.go:56] duration metric: took 21.376188092s for fixHost
	I0311 21:34:44.919867   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.922414   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922708   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.922738   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922876   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.923075   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923274   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923455   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.923618   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.923806   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.923831   70908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:45.026068   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192885.004450463
	
	I0311 21:34:45.026088   70908 fix.go:216] guest clock: 1710192885.004450463
	I0311 21:34:45.026096   70908 fix.go:229] Guest: 2024-03-11 21:34:45.004450463 +0000 UTC Remote: 2024-03-11 21:34:44.919851167 +0000 UTC m=+283.922086595 (delta=84.599296ms)
	I0311 21:34:45.026118   70908 fix.go:200] guest clock delta is within tolerance: 84.599296ms
	I0311 21:34:45.026124   70908 start.go:83] releasing machines lock for "old-k8s-version-239315", held for 21.482500591s
	I0311 21:34:45.026158   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.026440   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:45.029366   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029778   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.029813   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029992   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030514   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030711   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030800   70908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:45.030846   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.030946   70908 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:45.030971   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.033851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.033989   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034264   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034292   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034324   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034348   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034429   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034618   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034633   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034799   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034814   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034979   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034977   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.035143   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.135748   70908 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:45.142408   70908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:45.297445   70908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:45.304482   70908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:45.304552   70908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:45.322754   70908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:45.322775   70908 start.go:494] detecting cgroup driver to use...
	I0311 21:34:45.322832   70908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:45.345988   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:45.363267   70908 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:45.363320   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:45.380892   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:45.396972   70908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:45.531640   70908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:45.700243   70908 docker.go:233] disabling docker service ...
	I0311 21:34:45.700306   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:45.730542   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:45.749068   70908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:45.903721   70908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:46.045122   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:46.065278   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:46.090726   70908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0311 21:34:46.090779   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.105783   70908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:46.105841   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.121702   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.136262   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.150628   70908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:46.163771   70908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:46.175613   70908 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:46.175675   70908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:46.193848   70908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:46.205694   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:46.344832   70908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:46.501773   70908 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:46.501851   70908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:46.507932   70908 start.go:562] Will wait 60s for crictl version
	I0311 21:34:46.507988   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:46.512337   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:46.555165   70908 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:46.555249   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.588554   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.623785   70908 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0311 21:34:44.627149   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:47.128405   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:45.052882   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Start
	I0311 21:34:45.053039   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring networks are active...
	I0311 21:34:45.053710   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring network default is active
	I0311 21:34:45.054156   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring network mk-default-k8s-diff-port-766430 is active
	I0311 21:34:45.054499   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Getting domain xml...
	I0311 21:34:45.055347   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Creating domain...
	I0311 21:34:46.378216   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting to get IP...
	I0311 21:34:46.379054   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.379376   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.379485   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.379392   71893 retry.go:31] will retry after 242.915621ms: waiting for machine to come up
	I0311 21:34:46.623729   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.624348   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.624375   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.624304   71893 retry.go:31] will retry after 274.237436ms: waiting for machine to come up
	I0311 21:34:46.899864   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.900347   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.900381   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.900296   71893 retry.go:31] will retry after 333.693752ms: waiting for machine to come up
	I0311 21:34:47.235751   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.236278   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.236309   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:47.236220   71893 retry.go:31] will retry after 513.728994ms: waiting for machine to come up
	I0311 21:34:47.752081   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.752585   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.752622   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:47.752553   71893 retry.go:31] will retry after 575.202217ms: waiting for machine to come up
	I0311 21:34:48.329095   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:48.329524   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:48.329557   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:48.329477   71893 retry.go:31] will retry after 741.05703ms: waiting for machine to come up
	I0311 21:34:49.072641   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.073163   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.073195   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:49.073101   71893 retry.go:31] will retry after 802.911807ms: waiting for machine to come up
	I0311 21:34:45.528876   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:47.530391   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:49.530451   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:46.625154   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:46.627732   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628080   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:46.628102   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628304   70908 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:46.633367   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:46.649537   70908 kubeadm.go:877] updating cluster {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:46.649677   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:34:46.649733   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:46.699194   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:46.699264   70908 ssh_runner.go:195] Run: which lz4
	I0311 21:34:46.703944   70908 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0311 21:34:46.709224   70908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:34:46.709258   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0311 21:34:48.747926   70908 crio.go:444] duration metric: took 2.044006932s to copy over tarball
	I0311 21:34:48.747994   70908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:34:49.629334   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:51.122454   70604 pod_ready.go:92] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:51.122481   70604 pod_ready.go:81] duration metric: took 11.006878828s for pod "coredns-5dd5756b68-czng9" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:51.122494   70604 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.227971   70604 pod_ready.go:92] pod "etcd-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.228001   70604 pod_ready.go:81] duration metric: took 1.105498501s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.228014   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.234804   70604 pod_ready.go:92] pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.234834   70604 pod_ready.go:81] duration metric: took 6.811865ms for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.234854   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.241448   70604 pod_ready.go:92] pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.241473   70604 pod_ready.go:81] duration metric: took 6.611927ms for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.241486   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n2qzt" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.249614   70604 pod_ready.go:92] pod "kube-proxy-n2qzt" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.249648   70604 pod_ready.go:81] duration metric: took 8.154372ms for pod "kube-proxy-n2qzt" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.249661   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:53.139924   70604 pod_ready.go:92] pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:53.139951   70604 pod_ready.go:81] duration metric: took 890.27792ms for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:53.139961   70604 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:49.877965   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.878438   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.878460   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:49.878397   71893 retry.go:31] will retry after 1.163030899s: waiting for machine to come up
	I0311 21:34:51.042660   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:51.043181   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:51.043210   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:51.043131   71893 retry.go:31] will retry after 1.225509553s: waiting for machine to come up
	I0311 21:34:52.269779   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:52.270321   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:52.270358   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:52.270250   71893 retry.go:31] will retry after 2.091046831s: waiting for machine to come up
	I0311 21:34:54.363231   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:54.363664   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:54.363693   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:54.363618   71893 retry.go:31] will retry after 1.759309864s: waiting for machine to come up
	I0311 21:34:52.031032   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:54.529537   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:52.300295   70908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.55227284s)
	I0311 21:34:52.300322   70908 crio.go:451] duration metric: took 3.552370125s to extract the tarball
	I0311 21:34:52.300331   70908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:34:52.349405   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:52.395791   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:52.395821   70908 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:34:52.395892   70908 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.395955   70908 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.396002   70908 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.396010   70908 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.395959   70908 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.395932   70908 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.395921   70908 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0311 21:34:52.395974   70908 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397721   70908 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.397760   70908 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.397767   70908 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.397768   70908 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.397762   70908 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397804   70908 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.398008   70908 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0311 21:34:52.398129   70908 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.548255   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.549300   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.560293   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.564094   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.564433   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.569516   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.578251   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0311 21:34:52.674385   70908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0311 21:34:52.674427   70908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.674475   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.725602   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.741797   70908 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0311 21:34:52.741840   70908 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.741882   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.793195   70908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0311 21:34:52.793239   70908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.793278   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798118   70908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0311 21:34:52.798174   70908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.798220   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798241   70908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0311 21:34:52.798277   70908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.798312   70908 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0311 21:34:52.798333   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798285   70908 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0311 21:34:52.798378   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.798399   70908 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.798434   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798336   70908 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0311 21:34:52.798510   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.957658   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.957712   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.957765   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.957816   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.957846   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0311 21:34:52.957904   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0311 21:34:52.957925   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:53.106649   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0311 21:34:53.106699   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0311 21:34:53.106913   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0311 21:34:53.107837   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0311 21:34:53.116024   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0311 21:34:53.122060   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0311 21:34:53.122118   70908 cache_images.go:92] duration metric: took 726.282306ms to LoadCachedImages
	W0311 21:34:53.122205   70908 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0311 21:34:53.122224   70908 kubeadm.go:928] updating node { 192.168.72.52 8443 v1.20.0 crio true true} ...
	I0311 21:34:53.122341   70908 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239315 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:53.122443   70908 ssh_runner.go:195] Run: crio config
	I0311 21:34:53.192161   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:34:53.192191   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:53.192211   70908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:53.192233   70908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.52 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239315 NodeName:old-k8s-version-239315 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0311 21:34:53.192405   70908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239315"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:53.192476   70908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0311 21:34:53.203965   70908 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:53.204019   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:53.215221   70908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0311 21:34:53.235943   70908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:34:53.255383   70908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0311 21:34:53.276634   70908 ssh_runner.go:195] Run: grep 192.168.72.52	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:53.281778   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:53.298479   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:53.450052   70908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:53.472459   70908 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315 for IP: 192.168.72.52
	I0311 21:34:53.472480   70908 certs.go:194] generating shared ca certs ...
	I0311 21:34:53.472524   70908 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:53.472676   70908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:53.472728   70908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:53.472771   70908 certs.go:256] generating profile certs ...
	I0311 21:34:53.472883   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/client.key
	I0311 21:34:53.472954   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key.1e888bb1
	I0311 21:34:53.473013   70908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key
	I0311 21:34:53.473143   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:53.473185   70908 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:53.473198   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:53.473237   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:53.473272   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:53.473307   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:53.473363   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:53.473988   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:53.527429   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:53.575908   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:53.622438   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:53.665366   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 21:34:53.702121   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0311 21:34:53.746066   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:53.779151   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:53.813286   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:53.847058   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:53.882261   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:53.912444   70908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:53.932592   70908 ssh_runner.go:195] Run: openssl version
	I0311 21:34:53.939200   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:53.955630   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960866   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960920   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.967258   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:53.981075   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:53.995065   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000196   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000272   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.008574   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:54.022782   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:54.037409   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042893   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042965   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.049497   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:54.062597   70908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:54.067971   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:54.074746   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:54.081323   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:54.088762   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:54.095529   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:54.102396   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 21:34:54.109553   70908 kubeadm.go:391] StartCluster: {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:54.109639   70908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:54.109689   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.152063   70908 cri.go:89] found id: ""
	I0311 21:34:54.152143   70908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:54.163988   70908 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:54.164005   70908 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:54.164011   70908 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:54.164050   70908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:54.175616   70908 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:54.176779   70908 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239315" does not appear in /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:34:54.177542   70908 kubeconfig.go:62] /home/jenkins/minikube-integration/18358-11004/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239315" cluster setting kubeconfig missing "old-k8s-version-239315" context setting]
	I0311 21:34:54.178649   70908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:54.180405   70908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:54.191864   70908 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.52
	I0311 21:34:54.191891   70908 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:54.191903   70908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:54.191948   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.233779   70908 cri.go:89] found id: ""
	I0311 21:34:54.233852   70908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:54.253672   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:54.266010   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:54.266038   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:54.266085   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:54.277867   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:54.277918   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:54.288984   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:54.300133   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:54.300197   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:54.312090   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.323997   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:54.324059   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.337225   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:54.348223   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:54.348266   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:54.359245   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:54.370003   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:54.525972   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.408437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.676995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.819933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.913736   70908 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:55.913811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:55.147500   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:57.148276   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:56.124678   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:56.125150   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:56.125183   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:56.125101   71893 retry.go:31] will retry after 2.284226205s: waiting for machine to come up
	I0311 21:34:58.412391   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:58.412973   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:58.413002   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:58.412923   71893 retry.go:31] will retry after 4.532871869s: waiting for machine to come up
	I0311 21:34:57.031683   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:59.032261   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:56.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:56.914753   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.413928   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.914123   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.413931   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.914199   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.414205   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.913880   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.914121   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.148774   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:01.646997   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:03.647990   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:02.948316   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:02.948762   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:35:02.948790   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:35:02.948704   71893 retry.go:31] will retry after 4.885152649s: waiting for machine to come up
	I0311 21:35:01.529589   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:04.028860   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:01.414003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:01.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.913977   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.414740   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.914735   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.414726   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.914846   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.914715   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.648516   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:08.147744   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:07.835002   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.835551   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Found IP for machine: 192.168.61.11
	I0311 21:35:07.835585   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Reserving static IP address...
	I0311 21:35:07.835601   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has current primary IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.836026   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-766430", mac: "52:54:00:41:07:8d", ip: "192.168.61.11"} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.836055   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | skip adding static IP to network mk-default-k8s-diff-port-766430 - found existing host DHCP lease matching {name: "default-k8s-diff-port-766430", mac: "52:54:00:41:07:8d", ip: "192.168.61.11"}
	I0311 21:35:07.836075   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Reserved static IP address: 192.168.61.11
	I0311 21:35:07.836110   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Getting to WaitForSSH function...
	I0311 21:35:07.836125   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for SSH to be available...
	I0311 21:35:07.838230   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.838601   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.838631   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.838757   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Using SSH client type: external
	I0311 21:35:07.838784   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa (-rw-------)
	I0311 21:35:07.838830   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:35:07.838871   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | About to run SSH command:
	I0311 21:35:07.838897   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | exit 0
	I0311 21:35:07.968765   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | SSH cmd err, output: <nil>: 
	I0311 21:35:07.969119   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetConfigRaw
	I0311 21:35:07.969756   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:07.972490   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.972921   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.972949   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.973180   70417 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/config.json ...
	I0311 21:35:07.973362   70417 machine.go:94] provisionDockerMachine start ...
	I0311 21:35:07.973381   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:07.973582   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:07.975926   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.976254   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.976277   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.976419   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:07.976566   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:07.976704   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:07.976847   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:07.976991   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:07.977161   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:07.977171   70417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:35:08.093841   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:35:08.093864   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.094076   70417 buildroot.go:166] provisioning hostname "default-k8s-diff-port-766430"
	I0311 21:35:08.094100   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.094329   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.097134   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.097498   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.097528   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.097670   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.097854   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.098021   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.098178   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.098409   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.098642   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.098657   70417 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-766430 && echo "default-k8s-diff-port-766430" | sudo tee /etc/hostname
	I0311 21:35:08.233860   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766430
	
	I0311 21:35:08.233890   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.236977   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.237387   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.237408   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.237596   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.237791   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.237962   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.238194   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.238359   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.238515   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.238532   70417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-766430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-766430/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-766430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:35:08.363393   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:35:08.363419   70417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:35:08.363471   70417 buildroot.go:174] setting up certificates
	I0311 21:35:08.363484   70417 provision.go:84] configureAuth start
	I0311 21:35:08.363497   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.363780   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:08.366605   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.366990   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.367012   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.367139   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.369314   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.369650   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.369676   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.369798   70417 provision.go:143] copyHostCerts
	I0311 21:35:08.369853   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:35:08.369863   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:35:08.369915   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:35:08.370005   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:35:08.370013   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:35:08.370032   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:35:08.370091   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:35:08.370098   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:35:08.370114   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:35:08.370169   70417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-766430 san=[127.0.0.1 192.168.61.11 default-k8s-diff-port-766430 localhost minikube]
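	The provision.go line above is minikube issuing a per-machine server certificate against its local CA, with the listed hostnames and IPs (127.0.0.1, 192.168.61.11, default-k8s-diff-port-766430, localhost, minikube) as subject alternative names. The following is a minimal, self-contained Go sketch of that SAN handling only; it creates a throwaway CA in place of minikube's ca.pem/ca-key.pem, and every name and value in it is illustrative, not minikube's actual provisioning code.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"log"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA, standing in for minikube's ca.pem / ca-key.pem.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}

		// Server certificate carrying the SANs listed in the log line above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-766430"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors the profile's CertExpiration of 26280h
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"default-k8s-diff-port-766430", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.11")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		srvCert, err := x509.ParseCertificate(srvDER)
		if err != nil {
			log.Fatal(err)
		}
		// Prints the DNS and IP SANs baked into the signed certificate.
		fmt.Println("server cert SANs:", srvCert.DNSNames, srvCert.IPAddresses)
	}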
	I0311 21:35:08.542469   70417 provision.go:177] copyRemoteCerts
	I0311 21:35:08.542529   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:35:08.542550   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.545388   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.545750   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.545782   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.545958   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.546115   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.546264   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.546360   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:08.635866   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:35:08.667490   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0311 21:35:08.697944   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:35:08.726836   70417 provision.go:87] duration metric: took 363.34159ms to configureAuth
	I0311 21:35:08.726860   70417 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:35:08.727033   70417 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:35:08.727115   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.730050   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.730458   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.730489   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.730788   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.730987   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.731170   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.731317   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.731466   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.731607   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.731629   70417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:35:09.035100   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:35:09.035129   70417 machine.go:97] duration metric: took 1.061753229s to provisionDockerMachine
	I0311 21:35:09.035142   70417 start.go:293] postStartSetup for "default-k8s-diff-port-766430" (driver="kvm2")
	I0311 21:35:09.035151   70417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:35:09.035165   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.035458   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:35:09.035484   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.038340   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.038638   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.038668   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.038829   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.039027   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.039178   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.039343   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:09.133013   70417 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:35:09.138043   70417 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:35:09.138065   70417 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:35:09.138166   70417 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:35:09.138259   70417 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:35:09.138364   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:35:09.149527   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:35:09.176424   70417 start.go:296] duration metric: took 141.271199ms for postStartSetup
	I0311 21:35:09.176460   70417 fix.go:56] duration metric: took 24.15021813s for fixHost
	I0311 21:35:09.176479   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.179447   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.179830   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.179859   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.180147   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.180402   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.180566   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.180758   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.180974   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:09.181186   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:09.181200   70417 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 21:35:09.297740   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192909.282566583
	
	I0311 21:35:09.297764   70417 fix.go:216] guest clock: 1710192909.282566583
	I0311 21:35:09.297773   70417 fix.go:229] Guest: 2024-03-11 21:35:09.282566583 +0000 UTC Remote: 2024-03-11 21:35:09.176465496 +0000 UTC m=+364.839103648 (delta=106.101087ms)
	I0311 21:35:09.297795   70417 fix.go:200] guest clock delta is within tolerance: 106.101087ms
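	The fix.go lines above compare the guest VM's clock against the host's and accept the 106ms drift as being within tolerance. Below is a tiny Go sketch of that kind of check, using the two timestamps from the log; the one-second tolerance is an assumed value for illustration, not necessarily the threshold minikube applies.

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest clock is close enough to the
	// host clock that no correction is needed, and returns the absolute delta.
	func withinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Values taken from the fix.go log lines above.
		host := time.Date(2024, 3, 11, 21, 35, 9, 176465496, time.UTC)
		guest := time.Date(2024, 3, 11, 21, 35, 9, 282566583, time.UTC)

		delta, ok := withinTolerance(host, guest, time.Second) // 1s tolerance is illustrative only
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // prints delta=106.101087ms within tolerance: true
	}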
	I0311 21:35:09.297802   70417 start.go:83] releasing machines lock for "default-k8s-diff-port-766430", held for 24.271590337s
	I0311 21:35:09.297825   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.298067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:09.300989   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.301399   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.301422   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.301604   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302091   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302291   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302385   70417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:35:09.302433   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.302490   70417 ssh_runner.go:195] Run: cat /version.json
	I0311 21:35:09.302515   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.305403   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305572   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305802   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.305831   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305912   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.306042   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.306067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.306067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.306223   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.306351   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.306430   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:09.306511   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.306645   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.306772   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:06.528726   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:09.029055   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:09.419852   70417 ssh_runner.go:195] Run: systemctl --version
	I0311 21:35:09.427141   70417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:35:09.579321   70417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:35:09.586396   70417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:35:09.586470   70417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:35:09.606617   70417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:35:09.606639   70417 start.go:494] detecting cgroup driver to use...
	I0311 21:35:09.606705   70417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:35:09.627066   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:35:09.646091   70417 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:35:09.646151   70417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:35:09.662307   70417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:35:09.679793   70417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:35:09.828827   70417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:35:09.984773   70417 docker.go:233] disabling docker service ...
	I0311 21:35:09.984843   70417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:35:10.003968   70417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:35:10.018609   70417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:35:10.174297   70417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:35:10.316762   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:35:10.338008   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:35:10.359320   70417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:35:10.359374   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.371953   70417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:35:10.372008   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.384823   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.397305   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.409521   70417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:35:10.424714   70417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:35:10.438470   70417 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:35:10.438529   70417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:35:10.454436   70417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:35:10.465004   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:35:10.611379   70417 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:35:10.786860   70417 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:35:10.786959   70417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:35:10.792496   70417 start.go:562] Will wait 60s for crictl version
	I0311 21:35:10.792551   70417 ssh_runner.go:195] Run: which crictl
	I0311 21:35:10.797079   70417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:35:10.837010   70417 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:35:10.837086   70417 ssh_runner.go:195] Run: crio --version
	I0311 21:35:10.868308   70417 ssh_runner.go:195] Run: crio --version
	I0311 21:35:10.900087   70417 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 21:35:06.414389   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:06.914233   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.414565   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.914773   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.414348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.914743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.413987   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.914698   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.150688   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:12.648444   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:10.901304   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:10.904103   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:10.904380   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:10.904407   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:10.904557   70417 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0311 21:35:10.909585   70417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:35:10.924163   70417 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-766430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:35:10.924311   70417 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:35:10.924408   70417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:35:10.969555   70417 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 21:35:10.969623   70417 ssh_runner.go:195] Run: which lz4
	I0311 21:35:10.974054   70417 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:35:10.978776   70417 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:35:10.978811   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 21:35:12.893346   70417 crio.go:444] duration metric: took 1.91931676s to copy over tarball
	I0311 21:35:12.893421   70417 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:35:11.031301   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:13.527896   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:11.414320   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:11.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.414529   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.914476   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.414282   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.914426   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.414521   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.914001   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.414839   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.913921   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
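	The long run of repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above (pid 70908, the old-k8s-version profile) is minikube polling roughly every 500ms for the apiserver process to appear after the kubeadm init phases finished. Below is a minimal Go sketch of that style of poll loop; runRemote is a stand-in that runs the command locally rather than over SSH and is not minikube's real ssh_runner API.

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// runRemote is a stand-in for an SSH command runner: here it simply executes
	// the command locally so the sketch is runnable on any Linux host.
	func runRemote(ctx context.Context, command string) error {
		return exec.CommandContext(ctx, "/bin/sh", "-c", command).Run()
	}

	// waitForAPIServer polls until pgrep finds a kube-apiserver process or the
	// context expires, mirroring the repeated log lines above.
	func waitForAPIServer(ctx context.Context, interval time.Duration) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if err := runRemote(ctx, "sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
				return nil // process found: pgrep exited 0
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
			case <-ticker.C:
				// not up yet; try again on the next tick
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForAPIServer(ctx, 500*time.Millisecond); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver is running")
	}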
	I0311 21:35:14.648625   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:17.148688   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:15.772070   70417 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.878627154s)
	I0311 21:35:15.772094   70417 crio.go:451] duration metric: took 2.878719213s to extract the tarball
	I0311 21:35:15.772101   70417 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:35:15.818581   70417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:35:15.872635   70417 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:35:15.872658   70417 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:35:15.872667   70417 kubeadm.go:928] updating node { 192.168.61.11 8444 v1.28.4 crio true true} ...
	I0311 21:35:15.872823   70417 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-766430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:35:15.872933   70417 ssh_runner.go:195] Run: crio config
	I0311 21:35:15.928776   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:35:15.928803   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:35:15.928818   70417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:35:15.928843   70417 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-766430 NodeName:default-k8s-diff-port-766430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:35:15.929018   70417 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-766430"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
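
The kubelet section above turns off disk-pressure eviction: imageGCHighThresholdPercent is 100 and every evictionHard threshold is "0%". As a quick way to sanity-check such a fragment outside the cluster, here is a minimal Go sketch (assuming gopkg.in/yaml.v3 is available; the struct models only the fields read here, not the real KubeletConfiguration type):

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// kubeletCfg models only the handful of fields this sketch reads.
type kubeletCfg struct {
	CgroupDriver                string            `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint    string            `yaml:"containerRuntimeEndpoint"`
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
}

const fragment = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
`

func main() {
	var cfg kubeletCfg
	if err := yaml.Unmarshal([]byte(fragment), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("runtime endpoint:", cfg.ContainerRuntimeEndpoint)
	fmt.Println("image GC high threshold:", cfg.ImageGCHighThresholdPercent)
	fmt.Println("eviction thresholds:", cfg.EvictionHard) // all "0%": disk eviction effectively disabled
}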
	
	I0311 21:35:15.929090   70417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 21:35:15.941853   70417 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:35:15.941908   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:35:15.954936   70417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0311 21:35:15.975236   70417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:35:15.994509   70417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0311 21:35:16.014058   70417 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I0311 21:35:16.018972   70417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:35:16.035169   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:35:16.160453   70417 ssh_runner.go:195] Run: sudo systemctl start kubelet
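
The /etc/hosts update above is a shell pipeline that removes any stale control-plane.minikube.internal line and appends the current IP. A rough Go equivalent of that idempotent edit (a sketch, not minikube's implementation; the scratch path in main is hypothetical):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so that exactly one line maps ip to host,
// mirroring the `{ grep -v ...; echo ...; } > /tmp/h.$$; cp` pipeline in the log.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // drop blank lines and any stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// The log does this over SSH with sudo against /etc/hosts; a scratch file keeps the sketch harmless.
	if err := ensureHostsEntry("/tmp/hosts.example", "192.168.61.11", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}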
	I0311 21:35:16.182252   70417 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430 for IP: 192.168.61.11
	I0311 21:35:16.182272   70417 certs.go:194] generating shared ca certs ...
	I0311 21:35:16.182286   70417 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:35:16.182419   70417 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:35:16.182465   70417 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:35:16.182475   70417 certs.go:256] generating profile certs ...
	I0311 21:35:16.182545   70417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/client.key
	I0311 21:35:16.182601   70417 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.key.2c00376c
	I0311 21:35:16.182635   70417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.key
	I0311 21:35:16.182754   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:35:16.182783   70417 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:35:16.182789   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:35:16.182823   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:35:16.182844   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:35:16.182867   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:35:16.182901   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:35:16.183517   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:35:16.231409   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:35:16.277004   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:35:16.315346   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:35:16.352697   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0311 21:35:16.388570   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 21:35:16.422830   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:35:16.452562   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 21:35:16.480976   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:35:16.507149   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:35:16.535832   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:35:16.566697   70417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:35:16.587454   70417 ssh_runner.go:195] Run: openssl version
	I0311 21:35:16.593880   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:35:16.608197   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.613604   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.613673   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.620156   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:35:16.632634   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:35:16.646047   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.652530   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.652591   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.660480   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:35:16.673572   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:35:16.687161   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.692589   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.692632   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.705471   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:35:16.718251   70417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:35:16.723979   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:35:16.731335   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:35:16.738485   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:35:16.745489   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:35:16.752295   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:35:16.759251   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
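
Each certificate above is probed with openssl x509 -checkend 86400, which exits non-zero when the certificate would expire within the next 24 hours. A small Go sketch of the same probe, shelling out to openssl (assumes openssl is on PATH; the certificate path in main is taken from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// expiresWithin reports whether certPath expires within the given number of seconds,
// using the same `openssl x509 -noout -checkend` probe seen in the log.
func expiresWithin(certPath string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath,
		"-checkend", fmt.Sprint(seconds))
	err := cmd.Run()
	if err == nil {
		return false, nil // exit 0: certificate is still valid past the window
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return true, nil // non-zero exit: certificate expires within the window
	}
	return false, err // openssl missing, unreadable file, etc.
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	fmt.Println(soon, err)
}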
	I0311 21:35:16.766128   70417 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-766430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:35:16.766237   70417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:35:16.766292   70417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:35:16.806418   70417 cri.go:89] found id: ""
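
cri.go lists kube-system containers by shelling out to crictl with a pod-namespace label filter; an empty result, as here, means there is nothing to stop. A sketch of the same call (assumes crictl is installed and configured for the CRI-O socket; this is not minikube's cri package):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listPodContainers returns the container IDs crictl reports for the given namespace label,
// mirroring the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=...` call in the log.
func listPodContainers(namespace string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace="+namespace).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listPodContainers("kube-system")
	fmt.Println(ids, err) // an empty list corresponds to the `found id: ""` lines above
}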
	I0311 21:35:16.806478   70417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:35:16.821434   70417 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:35:16.821455   70417 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:35:16.821462   70417 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:35:16.821514   70417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:35:16.835457   70417 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:35:16.836764   70417 kubeconfig.go:125] found "default-k8s-diff-port-766430" server: "https://192.168.61.11:8444"
	I0311 21:35:16.839163   70417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:35:16.850037   70417 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.11
	I0311 21:35:16.850065   70417 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:35:16.850074   70417 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:35:16.850117   70417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:35:16.895532   70417 cri.go:89] found id: ""
	I0311 21:35:16.895612   70417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:35:16.913151   70417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:35:16.927989   70417 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:35:16.928014   70417 kubeadm.go:156] found existing configuration files:
	
	I0311 21:35:16.928073   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0311 21:35:16.939803   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:35:16.939849   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:35:16.950103   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0311 21:35:16.960164   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:35:16.960213   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:35:16.970349   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0311 21:35:16.980056   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:35:16.980098   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:35:16.990189   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0311 21:35:16.999799   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:35:16.999874   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:35:17.010502   70417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:35:17.021106   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:17.136170   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.044684   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.296278   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.376702   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.473740   70417 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:35:18.473840   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.974894   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.529099   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:17.755777   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:20.028341   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:16.414018   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:16.914685   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.414894   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.914319   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.414875   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.914338   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.414496   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.914396   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.414731   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.914149   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.648967   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:22.148024   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:19.474609   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.499907   70417 api_server.go:72] duration metric: took 1.026169594s to wait for apiserver process to appear ...
	I0311 21:35:19.499931   70417 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:35:19.499951   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:19.500566   70417 api_server.go:269] stopped: https://192.168.61.11:8444/healthz: Get "https://192.168.61.11:8444/healthz": dial tcp 192.168.61.11:8444: connect: connection refused
	I0311 21:35:20.000807   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:22.693958   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:35:22.693991   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:35:22.694006   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:22.772747   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:22.772792   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:23.000004   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:23.004763   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:23.004805   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:23.500112   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:23.507209   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:23.507236   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:24.000861   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:24.006793   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:24.006830   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:24.500264   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:24.508242   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0311 21:35:24.520230   70417 api_server.go:141] control plane version: v1.28.4
	I0311 21:35:24.520255   70417 api_server.go:131] duration metric: took 5.020318338s to wait for apiserver health ...
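
The healthz wait above retries roughly every half second, treating connection refused, 403 from the anonymous user, and 500 while post-start hooks finish as transient, and stops once the endpoint returns 200. A minimal Go sketch of such a loop (InsecureSkipVerify because the apiserver uses cluster-local certificates; the timeout is illustrative, not minikube's):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// Connection refused, 403 (anonymous user) and 500 (hooks still starting) are simply retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.11:8444/healthz", time.Minute))
}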
	I0311 21:35:24.520285   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:35:24.520291   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:35:24.522151   70417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:35:22.029963   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:24.530052   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:21.414126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:21.914012   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.414680   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.414478   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.914770   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.414370   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.914772   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.413991   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.914516   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.149179   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:26.647134   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:28.647725   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:24.523964   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:35:24.538536   70417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:35:24.583279   70417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:35:24.594703   70417 system_pods.go:59] 8 kube-system pods found
	I0311 21:35:24.594730   70417 system_pods.go:61] "coredns-5dd5756b68-pkn9d" [ee4de3f7-1044-4dc9-91dc-d9b23493b0bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:35:24.594737   70417 system_pods.go:61] "etcd-default-k8s-diff-port-766430" [96b9327c-f97d-463f-9d1e-3210b4032aab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:35:24.594751   70417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766430" [fc650f48-2e28-4219-8571-8b6c43891eb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:35:24.594763   70417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766430" [c7cc5d40-ad56-4132-ab81-3422ffe1d5b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:35:24.594772   70417 system_pods.go:61] "kube-proxy-cggzr" [f6b7fe4e-7d57-4604-b63d-f9890826b659] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:35:24.594784   70417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766430" [8a156fec-b2f3-46e8-bf0d-0bf291ef8783] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:35:24.594795   70417 system_pods.go:61] "metrics-server-57f55c9bc5-kxl6n" [ac62700b-a39a-480e-841e-852bf3c66e7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:35:24.594805   70417 system_pods.go:61] "storage-provisioner" [a0b03582-0d90-4a7f-919c-0552046edcb5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:35:24.594821   70417 system_pods.go:74] duration metric: took 11.523907ms to wait for pod list to return data ...
	I0311 21:35:24.594830   70417 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:35:24.606500   70417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:35:24.606529   70417 node_conditions.go:123] node cpu capacity is 2
	I0311 21:35:24.606546   70417 node_conditions.go:105] duration metric: took 11.711241ms to run NodePressure ...
	I0311 21:35:24.606565   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:24.893361   70417 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:35:24.899200   70417 kubeadm.go:733] kubelet initialised
	I0311 21:35:24.899225   70417 kubeadm.go:734] duration metric: took 5.837351ms waiting for restarted kubelet to initialise ...
	I0311 21:35:24.899235   70417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:35:24.905858   70417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:26.912640   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:28.916566   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
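
pod_ready.go polls each system-critical pod until its Ready condition is True, with a 4m0s budget per pod. A rough client-go sketch of that wait (the kubeconfig path is a placeholder; the pod name is the coredns pod from the log; this is not minikube's own helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-pkn9d", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}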
	I0311 21:35:27.029381   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:29.529565   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:26.414267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:26.914876   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.414469   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.914513   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.414924   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.914126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.414526   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.914039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.414305   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.914438   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.147527   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:33.147694   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.413246   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.912878   70417 pod_ready.go:92] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:31.912899   70417 pod_ready.go:81] duration metric: took 7.007017714s for pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:31.912908   70417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:33.977091   70417 pod_ready.go:102] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:32.029295   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:34.529021   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.414610   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.914472   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.414158   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.914169   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.414745   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.914820   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.414071   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.914228   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.414135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.914695   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.148058   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:37.648200   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.422565   70417 pod_ready.go:102] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.921304   70417 pod_ready.go:92] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.921328   70417 pod_ready.go:81] duration metric: took 5.008411943s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.921340   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.927268   70417 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.927284   70417 pod_ready.go:81] duration metric: took 5.936969ms for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.927292   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.932540   70417 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.932563   70417 pod_ready.go:81] duration metric: took 5.264737ms for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.932575   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cggzr" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.937456   70417 pod_ready.go:92] pod "kube-proxy-cggzr" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.937473   70417 pod_ready.go:81] duration metric: took 4.892276ms for pod "kube-proxy-cggzr" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.937480   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.942372   70417 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.942390   70417 pod_ready.go:81] duration metric: took 4.902792ms for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.942401   70417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:38.949452   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.531316   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:39.030491   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.414435   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:36.914157   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.414539   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.914811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.414070   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.914303   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.413935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.914135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.414569   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.914106   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.147355   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:42.148353   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:40.950204   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:42.950335   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:41.528874   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:43.530140   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:41.414404   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:41.914323   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.414215   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.914566   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.414671   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.914658   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.414703   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.913966   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.414045   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.914260   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.648282   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:47.148247   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:45.449963   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:47.451576   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:46.029164   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:48.529137   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:46.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:46.914821   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.414210   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.914008   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.413884   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.914160   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.414877   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.914379   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.414293   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.913867   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.148585   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.648372   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:49.949667   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.950874   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:53.953067   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:50.529616   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:53.030586   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.414582   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:51.914453   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.414668   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.914816   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.414768   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.914592   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.414743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.914307   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:55.414000   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:55.914553   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:55.914636   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:55.957434   70908 cri.go:89] found id: ""
	I0311 21:35:55.957459   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.957470   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:55.957477   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:55.957545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:55.995255   70908 cri.go:89] found id: ""
	I0311 21:35:55.995279   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.995290   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:55.995305   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:55.995364   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:56.038893   70908 cri.go:89] found id: ""
	I0311 21:35:56.038916   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.038926   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:56.038933   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:56.038990   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:54.147165   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.148641   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.647841   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.451057   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.950421   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:55.528922   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.029209   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:00.029912   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.081497   70908 cri.go:89] found id: ""
	I0311 21:35:56.081517   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.081528   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:56.081534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:56.081591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:56.120047   70908 cri.go:89] found id: ""
	I0311 21:35:56.120071   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.120079   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:56.120084   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:56.120156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:56.157350   70908 cri.go:89] found id: ""
	I0311 21:35:56.157370   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.157377   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:56.157382   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:56.157433   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:56.198324   70908 cri.go:89] found id: ""
	I0311 21:35:56.198354   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.198374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:56.198381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:56.198437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:56.236579   70908 cri.go:89] found id: ""
	I0311 21:35:56.236608   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.236619   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:35:56.236691   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:56.236712   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:56.377789   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:35:56.377809   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:56.377825   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:56.449765   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:56.449807   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:56.502417   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:56.502448   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:56.557205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:56.557241   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:35:59.073411   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:59.088205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:59.088287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:59.126458   70908 cri.go:89] found id: ""
	I0311 21:35:59.126486   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.126494   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:59.126499   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:59.126555   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:59.197887   70908 cri.go:89] found id: ""
	I0311 21:35:59.197911   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.197919   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:59.197924   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:59.197967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:59.239523   70908 cri.go:89] found id: ""
	I0311 21:35:59.239552   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.239562   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:59.239570   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:59.239642   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:59.280903   70908 cri.go:89] found id: ""
	I0311 21:35:59.280930   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.280940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:59.280947   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:59.281024   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:59.320218   70908 cri.go:89] found id: ""
	I0311 21:35:59.320242   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.320254   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:59.320260   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:59.320314   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:59.361235   70908 cri.go:89] found id: ""
	I0311 21:35:59.361265   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.361276   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:59.361283   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:59.361352   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:59.409477   70908 cri.go:89] found id: ""
	I0311 21:35:59.409503   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.409514   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:59.409522   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:59.409568   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:59.454704   70908 cri.go:89] found id: ""
	I0311 21:35:59.454728   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.454739   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:35:59.454748   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:59.454767   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:59.525839   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:59.525864   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:59.569577   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:59.569606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:59.628402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:59.628437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:35:59.647181   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:59.647208   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:59.731300   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:00.650515   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:03.146560   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:01.449702   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:03.950341   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:02.030569   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:04.529453   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:02.232458   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:02.246948   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:02.247025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:02.290561   70908 cri.go:89] found id: ""
	I0311 21:36:02.290588   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.290599   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:02.290605   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:02.290659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:02.333788   70908 cri.go:89] found id: ""
	I0311 21:36:02.333814   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.333821   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:02.333826   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:02.333877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:02.375774   70908 cri.go:89] found id: ""
	I0311 21:36:02.375798   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.375806   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:02.375812   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:02.375862   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:02.414741   70908 cri.go:89] found id: ""
	I0311 21:36:02.414781   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.414803   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:02.414810   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:02.414875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:02.456637   70908 cri.go:89] found id: ""
	I0311 21:36:02.456660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.456670   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:02.456677   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:02.456759   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:02.494633   70908 cri.go:89] found id: ""
	I0311 21:36:02.494660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.494670   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:02.494678   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:02.494738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:02.536187   70908 cri.go:89] found id: ""
	I0311 21:36:02.536212   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.536223   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:02.536230   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:02.536291   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:02.574933   70908 cri.go:89] found id: ""
	I0311 21:36:02.574962   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.574973   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:02.574985   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:02.575001   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:02.656610   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:02.656637   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:02.656653   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:02.730514   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:02.730548   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:02.776009   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:02.776041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:02.829792   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:02.829826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.345568   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:05.360082   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:05.360164   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:05.406106   70908 cri.go:89] found id: ""
	I0311 21:36:05.406131   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.406141   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:05.406147   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:05.406203   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:05.449584   70908 cri.go:89] found id: ""
	I0311 21:36:05.449608   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.449617   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:05.449624   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:05.449680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:05.493869   70908 cri.go:89] found id: ""
	I0311 21:36:05.493898   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.493912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:05.493928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:05.493994   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:05.563506   70908 cri.go:89] found id: ""
	I0311 21:36:05.563532   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.563542   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:05.563549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:05.563600   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:05.630140   70908 cri.go:89] found id: ""
	I0311 21:36:05.630165   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.630172   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:05.630177   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:05.630230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:05.675584   70908 cri.go:89] found id: ""
	I0311 21:36:05.675612   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.675623   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:05.675631   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:05.675689   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:05.720521   70908 cri.go:89] found id: ""
	I0311 21:36:05.720548   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.720557   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:05.720563   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:05.720615   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:05.759323   70908 cri.go:89] found id: ""
	I0311 21:36:05.759351   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.759359   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:05.759367   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:05.759379   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:05.801024   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:05.801050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:05.856330   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:05.856356   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.871299   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:05.871324   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:05.950218   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:05.950245   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:05.950259   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:05.148227   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:07.647389   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:05.950833   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:08.449548   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:07.028964   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:09.029396   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:08.535502   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:08.552152   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:08.552220   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:08.596602   70908 cri.go:89] found id: ""
	I0311 21:36:08.596707   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.596731   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:08.596755   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:08.596820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:08.641091   70908 cri.go:89] found id: ""
	I0311 21:36:08.641119   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.641130   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:08.641137   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:08.641198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:08.684466   70908 cri.go:89] found id: ""
	I0311 21:36:08.684494   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.684503   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:08.684510   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:08.684570   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:08.730899   70908 cri.go:89] found id: ""
	I0311 21:36:08.730924   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.730931   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:08.730937   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:08.730997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:08.775293   70908 cri.go:89] found id: ""
	I0311 21:36:08.775317   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.775324   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:08.775330   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:08.775387   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:08.816098   70908 cri.go:89] found id: ""
	I0311 21:36:08.816126   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.816137   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:08.816144   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:08.816207   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:08.857413   70908 cri.go:89] found id: ""
	I0311 21:36:08.857449   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.857460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:08.857476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:08.857541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:08.898252   70908 cri.go:89] found id: ""
	I0311 21:36:08.898283   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.898293   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:08.898302   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:08.898313   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:08.955162   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:08.955188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:08.970234   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:08.970258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:09.055025   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:09.055043   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:09.055055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:09.140345   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:09.140376   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:10.148323   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:12.647037   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:10.450796   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:12.450839   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:11.529842   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:14.029706   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:11.681542   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:11.697407   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:11.697481   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:11.740239   70908 cri.go:89] found id: ""
	I0311 21:36:11.740264   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.740274   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:11.740280   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:11.740336   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:11.777625   70908 cri.go:89] found id: ""
	I0311 21:36:11.777655   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.777667   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:11.777674   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:11.777745   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:11.817202   70908 cri.go:89] found id: ""
	I0311 21:36:11.817226   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.817233   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:11.817239   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:11.817306   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:11.858912   70908 cri.go:89] found id: ""
	I0311 21:36:11.858933   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.858940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:11.858945   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:11.858998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:11.897841   70908 cri.go:89] found id: ""
	I0311 21:36:11.897876   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.897887   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:11.897895   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:11.897955   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:11.936181   70908 cri.go:89] found id: ""
	I0311 21:36:11.936207   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.936218   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:11.936226   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:11.936293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:11.981882   70908 cri.go:89] found id: ""
	I0311 21:36:11.981905   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.981915   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:11.981922   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:11.981982   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:12.022270   70908 cri.go:89] found id: ""
	I0311 21:36:12.022298   70908 logs.go:276] 0 containers: []
	W0311 21:36:12.022309   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:12.022320   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:12.022333   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:12.074640   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:12.074668   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:12.089854   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:12.089879   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:12.179578   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:12.179595   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:12.179606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:12.263249   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:12.263285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:14.811547   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:14.827075   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:14.827175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:14.870512   70908 cri.go:89] found id: ""
	I0311 21:36:14.870544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.870555   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:14.870563   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:14.870625   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:14.908521   70908 cri.go:89] found id: ""
	I0311 21:36:14.908544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.908553   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:14.908558   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:14.908607   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:14.951702   70908 cri.go:89] found id: ""
	I0311 21:36:14.951729   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.951739   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:14.951746   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:14.951805   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:14.992590   70908 cri.go:89] found id: ""
	I0311 21:36:14.992618   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.992630   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:14.992638   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:14.992698   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:15.034535   70908 cri.go:89] found id: ""
	I0311 21:36:15.034556   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.034563   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:15.034569   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:15.034614   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:15.077175   70908 cri.go:89] found id: ""
	I0311 21:36:15.077200   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.077210   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:15.077218   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:15.077283   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:15.121500   70908 cri.go:89] found id: ""
	I0311 21:36:15.121530   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.121541   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:15.121549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:15.121655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:15.162712   70908 cri.go:89] found id: ""
	I0311 21:36:15.162738   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.162748   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:15.162757   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:15.162776   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:15.241469   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:15.241488   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:15.241499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:15.322257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:15.322291   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:15.368258   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:15.368285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:15.427131   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:15.427163   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:14.648776   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:17.148710   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:14.452948   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:16.949085   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:18.950111   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:16.030409   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:18.529122   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:17.944348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:17.958629   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:17.958704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:17.995869   70908 cri.go:89] found id: ""
	I0311 21:36:17.995895   70908 logs.go:276] 0 containers: []
	W0311 21:36:17.995904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:17.995914   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:17.995976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:18.032273   70908 cri.go:89] found id: ""
	I0311 21:36:18.032300   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.032308   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:18.032313   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:18.032361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:18.072497   70908 cri.go:89] found id: ""
	I0311 21:36:18.072519   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.072526   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:18.072532   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:18.072578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:18.110091   70908 cri.go:89] found id: ""
	I0311 21:36:18.110119   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.110129   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:18.110136   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:18.110199   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:18.152217   70908 cri.go:89] found id: ""
	I0311 21:36:18.152261   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.152272   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:18.152280   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:18.152347   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:18.193957   70908 cri.go:89] found id: ""
	I0311 21:36:18.193989   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.194000   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:18.194008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:18.194086   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:18.231828   70908 cri.go:89] found id: ""
	I0311 21:36:18.231861   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.231873   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:18.231880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:18.231939   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:18.271862   70908 cri.go:89] found id: ""
	I0311 21:36:18.271896   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.271907   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:18.271917   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:18.271933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:18.325405   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:18.325440   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:18.344560   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:18.344593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:18.425051   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:18.425075   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:18.425093   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:18.513247   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:18.513287   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:19.646758   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.647702   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.649318   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.450692   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.950088   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.028812   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.029828   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.060499   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:21.076648   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:21.076716   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:21.117270   70908 cri.go:89] found id: ""
	I0311 21:36:21.117298   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.117309   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:21.117317   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:21.117388   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:21.159005   70908 cri.go:89] found id: ""
	I0311 21:36:21.159045   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.159056   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:21.159063   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:21.159122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:21.196576   70908 cri.go:89] found id: ""
	I0311 21:36:21.196599   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.196609   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:21.196617   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:21.196677   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:21.237689   70908 cri.go:89] found id: ""
	I0311 21:36:21.237718   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.237729   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:21.237734   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:21.237783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:21.280662   70908 cri.go:89] found id: ""
	I0311 21:36:21.280696   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.280707   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:21.280714   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:21.280795   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:21.321475   70908 cri.go:89] found id: ""
	I0311 21:36:21.321501   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.321511   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:21.321518   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:21.321581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:21.365186   70908 cri.go:89] found id: ""
	I0311 21:36:21.365209   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.365216   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:21.365221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:21.365276   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:21.408678   70908 cri.go:89] found id: ""
	I0311 21:36:21.408713   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.408725   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:21.408754   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:21.408771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:21.466635   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:21.466663   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:21.482596   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:21.482622   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:21.556750   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:21.556769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:21.556780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:21.643095   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:21.643126   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.195112   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:24.208829   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:24.208895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:24.245956   70908 cri.go:89] found id: ""
	I0311 21:36:24.245981   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.245989   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:24.245995   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:24.246053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:24.289740   70908 cri.go:89] found id: ""
	I0311 21:36:24.289766   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.289778   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:24.289784   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:24.289846   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:24.336911   70908 cri.go:89] found id: ""
	I0311 21:36:24.336963   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.336977   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:24.336986   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:24.337057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:24.381715   70908 cri.go:89] found id: ""
	I0311 21:36:24.381739   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.381753   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:24.381761   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:24.381817   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:24.423759   70908 cri.go:89] found id: ""
	I0311 21:36:24.423787   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.423797   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:24.423805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:24.423882   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:24.468903   70908 cri.go:89] found id: ""
	I0311 21:36:24.468931   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.468946   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:24.468954   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:24.469013   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:24.509602   70908 cri.go:89] found id: ""
	I0311 21:36:24.509629   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.509639   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:24.509646   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:24.509706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:24.551483   70908 cri.go:89] found id: ""
	I0311 21:36:24.551511   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.551522   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:24.551532   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:24.551545   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:24.567123   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:24.567154   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:24.644215   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:24.644247   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:24.644262   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:24.726438   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:24.726469   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.779567   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:24.779596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:26.146823   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:28.148291   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:26.450637   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:28.949850   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:25.528542   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:27.529375   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:29.529701   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:27.337785   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:27.352504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:27.352578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:27.395787   70908 cri.go:89] found id: ""
	I0311 21:36:27.395809   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.395817   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:27.395823   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:27.395869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:27.441800   70908 cri.go:89] found id: ""
	I0311 21:36:27.441826   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.441834   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:27.441839   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:27.441893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:27.481761   70908 cri.go:89] found id: ""
	I0311 21:36:27.481791   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.481802   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:27.481809   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:27.481868   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:27.526981   70908 cri.go:89] found id: ""
	I0311 21:36:27.527011   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.527029   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:27.527037   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:27.527130   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:27.566569   70908 cri.go:89] found id: ""
	I0311 21:36:27.566602   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.566614   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:27.566622   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:27.566682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:27.607434   70908 cri.go:89] found id: ""
	I0311 21:36:27.607456   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.607464   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:27.607469   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:27.607529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:27.652648   70908 cri.go:89] found id: ""
	I0311 21:36:27.652674   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.652681   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:27.652686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:27.652756   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:27.691105   70908 cri.go:89] found id: ""
	I0311 21:36:27.691136   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.691148   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:27.691158   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:27.691173   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:27.706451   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:27.706477   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:27.788935   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:27.788959   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:27.788975   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:27.875721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:27.875758   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:27.927920   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:27.927951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.487728   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:30.503425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:30.503508   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:30.550846   70908 cri.go:89] found id: ""
	I0311 21:36:30.550868   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.550875   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:30.550881   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:30.550928   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:30.586886   70908 cri.go:89] found id: ""
	I0311 21:36:30.586915   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.586925   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:30.586934   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:30.586991   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:30.627849   70908 cri.go:89] found id: ""
	I0311 21:36:30.627884   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.627895   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:30.627902   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:30.627965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:30.669188   70908 cri.go:89] found id: ""
	I0311 21:36:30.669209   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.669216   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:30.669222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:30.669266   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:30.711676   70908 cri.go:89] found id: ""
	I0311 21:36:30.711697   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.711705   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:30.711710   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:30.711758   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:30.754218   70908 cri.go:89] found id: ""
	I0311 21:36:30.754240   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.754248   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:30.754253   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:30.754299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:30.791224   70908 cri.go:89] found id: ""
	I0311 21:36:30.791255   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.791263   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:30.791269   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:30.791328   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:30.831263   70908 cri.go:89] found id: ""
	I0311 21:36:30.831291   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.831301   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:30.831311   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:30.831326   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:30.876574   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:30.876600   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.928483   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:30.928509   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:30.944642   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:30.944665   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:31.026406   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:31.026428   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:31.026444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:30.648859   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.147907   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:30.952483   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.451714   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:32.028484   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:34.028948   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.611104   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:33.625644   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:33.625706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:33.664787   70908 cri.go:89] found id: ""
	I0311 21:36:33.664816   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.664825   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:33.664830   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:33.664894   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:33.704636   70908 cri.go:89] found id: ""
	I0311 21:36:33.704659   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.704666   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:33.704672   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:33.704717   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:33.744797   70908 cri.go:89] found id: ""
	I0311 21:36:33.744837   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.744848   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:33.744855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:33.744917   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:33.787435   70908 cri.go:89] found id: ""
	I0311 21:36:33.787464   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.787474   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:33.787482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:33.787541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:33.826578   70908 cri.go:89] found id: ""
	I0311 21:36:33.826606   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.826617   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:33.826624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:33.826684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:33.864854   70908 cri.go:89] found id: ""
	I0311 21:36:33.864875   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.864882   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:33.864887   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:33.864934   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:33.905366   70908 cri.go:89] found id: ""
	I0311 21:36:33.905397   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.905409   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:33.905416   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:33.905477   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:33.950196   70908 cri.go:89] found id: ""
	I0311 21:36:33.950222   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.950232   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:33.950243   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:33.950258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:34.001016   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:34.001049   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:34.059102   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:34.059131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:34.075879   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:34.075908   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:34.177114   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:34.177138   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:34.177161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:35.647611   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.147941   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:35.950147   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.449090   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:36.030072   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.527952   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:36.756459   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:36.772781   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:36.772867   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:36.820076   70908 cri.go:89] found id: ""
	I0311 21:36:36.820103   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.820111   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:36.820118   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:36.820169   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:36.859279   70908 cri.go:89] found id: ""
	I0311 21:36:36.859306   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.859317   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:36.859324   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:36.859383   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:36.899669   70908 cri.go:89] found id: ""
	I0311 21:36:36.899694   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.899705   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:36.899712   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:36.899770   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:36.938826   70908 cri.go:89] found id: ""
	I0311 21:36:36.938853   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.938864   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:36.938872   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:36.938957   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:36.976659   70908 cri.go:89] found id: ""
	I0311 21:36:36.976685   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.976693   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:36.976703   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:36.976772   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:37.015439   70908 cri.go:89] found id: ""
	I0311 21:36:37.015462   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.015469   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:37.015474   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:37.015519   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:37.057469   70908 cri.go:89] found id: ""
	I0311 21:36:37.057496   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.057507   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:37.057514   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:37.057579   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:37.106287   70908 cri.go:89] found id: ""
	I0311 21:36:37.106316   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.106325   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:37.106335   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:37.106352   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:37.122333   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:37.122367   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:37.197708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:37.197731   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:37.197742   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:37.281911   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:37.281944   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:37.335978   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:37.336011   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:39.891583   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:39.914741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:39.914823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:39.955751   70908 cri.go:89] found id: ""
	I0311 21:36:39.955773   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.955781   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:39.955786   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:39.955837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:39.997604   70908 cri.go:89] found id: ""
	I0311 21:36:39.997632   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.997642   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:39.997649   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:39.997711   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:40.039138   70908 cri.go:89] found id: ""
	I0311 21:36:40.039168   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.039178   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:40.039186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:40.039230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:40.079906   70908 cri.go:89] found id: ""
	I0311 21:36:40.079934   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.079945   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:40.079952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:40.080017   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:40.124116   70908 cri.go:89] found id: ""
	I0311 21:36:40.124141   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.124152   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:40.124159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:40.124221   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:40.165078   70908 cri.go:89] found id: ""
	I0311 21:36:40.165099   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.165108   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:40.165113   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:40.165158   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:40.203928   70908 cri.go:89] found id: ""
	I0311 21:36:40.203954   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.203962   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:40.203971   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:40.204018   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:40.244755   70908 cri.go:89] found id: ""
	I0311 21:36:40.244783   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.244793   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:40.244803   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:40.244819   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:40.302090   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:40.302125   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:40.318071   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:40.318097   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:40.405336   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:40.405363   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:40.405378   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:40.493262   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:40.493298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:40.148095   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.651483   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:40.449200   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.450259   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:40.528526   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.533619   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:45.029285   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:43.052419   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:43.068300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:43.068378   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:43.109665   70908 cri.go:89] found id: ""
	I0311 21:36:43.109701   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.109717   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:43.109725   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:43.109789   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:43.152233   70908 cri.go:89] found id: ""
	I0311 21:36:43.152253   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.152260   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:43.152265   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:43.152311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:43.194969   70908 cri.go:89] found id: ""
	I0311 21:36:43.194995   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.195002   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:43.195008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:43.195056   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:43.234555   70908 cri.go:89] found id: ""
	I0311 21:36:43.234581   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.234592   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:43.234597   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:43.234651   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:43.275188   70908 cri.go:89] found id: ""
	I0311 21:36:43.275214   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.275224   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:43.275232   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:43.275287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:43.314481   70908 cri.go:89] found id: ""
	I0311 21:36:43.314507   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.314515   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:43.314521   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:43.314580   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:43.353287   70908 cri.go:89] found id: ""
	I0311 21:36:43.353317   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.353328   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:43.353336   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:43.353395   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:43.396112   70908 cri.go:89] found id: ""
	I0311 21:36:43.396138   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.396150   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:43.396160   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:43.396175   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:43.456116   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:43.456143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:43.472992   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:43.473023   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:43.558281   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:43.558311   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:43.558327   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:43.641849   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:43.641885   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:45.147404   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.147574   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:44.954864   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.450806   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.029669   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:49.529505   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:46.187444   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:46.202848   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:46.202911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:46.244843   70908 cri.go:89] found id: ""
	I0311 21:36:46.244872   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.244880   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:46.244886   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:46.244933   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:46.297789   70908 cri.go:89] found id: ""
	I0311 21:36:46.297820   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.297831   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:46.297838   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:46.297903   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:46.353104   70908 cri.go:89] found id: ""
	I0311 21:36:46.353127   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.353134   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:46.353140   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:46.353211   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:46.426767   70908 cri.go:89] found id: ""
	I0311 21:36:46.426792   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.426799   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:46.426804   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:46.426858   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:46.469850   70908 cri.go:89] found id: ""
	I0311 21:36:46.469881   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.469891   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:46.469899   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:46.469960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:46.510692   70908 cri.go:89] found id: ""
	I0311 21:36:46.510718   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.510726   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:46.510732   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:46.510787   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:46.554445   70908 cri.go:89] found id: ""
	I0311 21:36:46.554468   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.554475   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:46.554482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:46.554527   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:46.592417   70908 cri.go:89] found id: ""
	I0311 21:36:46.592448   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.592458   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:46.592467   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:46.592480   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:46.607106   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:46.607146   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:46.691556   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:46.691575   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:46.691587   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:46.772468   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:46.772503   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:46.814478   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:46.814512   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.368451   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:49.383504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:49.383573   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:49.427392   70908 cri.go:89] found id: ""
	I0311 21:36:49.427415   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.427426   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:49.427434   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:49.427493   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:49.469022   70908 cri.go:89] found id: ""
	I0311 21:36:49.469044   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.469052   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:49.469059   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:49.469106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:49.510755   70908 cri.go:89] found id: ""
	I0311 21:36:49.510781   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.510792   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:49.510800   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:49.510886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:49.556594   70908 cri.go:89] found id: ""
	I0311 21:36:49.556631   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.556642   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:49.556649   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:49.556710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:49.597035   70908 cri.go:89] found id: ""
	I0311 21:36:49.597059   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.597067   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:49.597072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:49.597138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:49.642947   70908 cri.go:89] found id: ""
	I0311 21:36:49.642975   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.642985   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:49.642993   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:49.643051   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:49.681401   70908 cri.go:89] found id: ""
	I0311 21:36:49.681423   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.681430   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:49.681435   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:49.681478   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:49.718498   70908 cri.go:89] found id: ""
	I0311 21:36:49.718529   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.718539   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:49.718549   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:49.718563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:49.764483   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:49.764515   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.821261   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:49.821293   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:49.837110   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:49.837135   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:49.918507   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:49.918529   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:49.918541   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:49.648198   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.146837   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:49.450941   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:51.950760   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.030288   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:54.528831   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.500354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:52.516722   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:52.516811   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:52.563312   70908 cri.go:89] found id: ""
	I0311 21:36:52.563340   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.563354   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:52.563362   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:52.563421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:52.603545   70908 cri.go:89] found id: ""
	I0311 21:36:52.603572   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.603581   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:52.603588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:52.603657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:52.645624   70908 cri.go:89] found id: ""
	I0311 21:36:52.645648   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.645658   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:52.645665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:52.645722   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:52.693335   70908 cri.go:89] found id: ""
	I0311 21:36:52.693363   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.693373   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:52.693380   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:52.693437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:52.740272   70908 cri.go:89] found id: ""
	I0311 21:36:52.740310   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.740331   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:52.740341   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:52.740398   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:52.786241   70908 cri.go:89] found id: ""
	I0311 21:36:52.786276   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.786285   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:52.786291   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:52.786355   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:52.825013   70908 cri.go:89] found id: ""
	I0311 21:36:52.825042   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.825053   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:52.825061   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:52.825117   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:52.862867   70908 cri.go:89] found id: ""
	I0311 21:36:52.862892   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.862901   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:52.862908   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:52.862922   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:52.917005   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:52.917036   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:52.932086   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:52.932112   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:53.012379   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:53.012402   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:53.012413   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:53.096881   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:53.096913   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:55.640142   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:55.656664   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:55.656749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:55.697962   70908 cri.go:89] found id: ""
	I0311 21:36:55.697992   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.698000   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:55.698005   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:55.698059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:55.741888   70908 cri.go:89] found id: ""
	I0311 21:36:55.741910   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.741917   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:55.741921   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:55.741965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:55.779352   70908 cri.go:89] found id: ""
	I0311 21:36:55.779372   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.779381   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:55.779386   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:55.779430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:55.819496   70908 cri.go:89] found id: ""
	I0311 21:36:55.819530   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.819541   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:55.819549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:55.819612   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:55.859384   70908 cri.go:89] found id: ""
	I0311 21:36:55.859412   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.859419   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:55.859424   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:55.859473   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:55.899415   70908 cri.go:89] found id: ""
	I0311 21:36:55.899438   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.899445   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:55.899450   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:55.899496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:55.938595   70908 cri.go:89] found id: ""
	I0311 21:36:55.938625   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.938637   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:55.938645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:55.938710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:55.980064   70908 cri.go:89] found id: ""
	I0311 21:36:55.980089   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.980096   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:55.980103   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:55.980115   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:55.996222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:55.996297   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:36:54.147743   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.150270   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:58.648829   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:54.450767   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.949091   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:58.950443   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.529184   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:59.029323   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	W0311 21:36:56.081046   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:56.081074   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:56.081090   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:56.167748   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:56.167773   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:56.221118   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:56.221150   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:58.772403   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:58.789349   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:58.789421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:58.829945   70908 cri.go:89] found id: ""
	I0311 21:36:58.829974   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.829985   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:58.829993   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:58.830059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:58.877190   70908 cri.go:89] found id: ""
	I0311 21:36:58.877214   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.877224   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:58.877231   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:58.877295   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:58.920086   70908 cri.go:89] found id: ""
	I0311 21:36:58.920113   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.920122   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:58.920128   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:58.920189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:58.956864   70908 cri.go:89] found id: ""
	I0311 21:36:58.956890   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.956900   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:58.956907   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:58.956967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:58.999363   70908 cri.go:89] found id: ""
	I0311 21:36:58.999390   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.999400   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:58.999408   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:58.999469   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:59.041759   70908 cri.go:89] found id: ""
	I0311 21:36:59.041787   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.041797   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:59.041803   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:59.041850   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:59.084378   70908 cri.go:89] found id: ""
	I0311 21:36:59.084406   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.084417   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:59.084425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:59.084479   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:59.124105   70908 cri.go:89] found id: ""
	I0311 21:36:59.124151   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.124163   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:59.124173   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:59.124188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:59.202060   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:59.202083   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:59.202098   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:59.284025   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:59.284060   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:59.327926   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:59.327951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:59.382505   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:59.382533   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:01.147260   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.149020   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.450230   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.949834   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.529173   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.532427   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.900084   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:01.914495   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:01.914552   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:01.956887   70908 cri.go:89] found id: ""
	I0311 21:37:01.956912   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.956922   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:01.956929   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:01.956986   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:01.995358   70908 cri.go:89] found id: ""
	I0311 21:37:01.995385   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.995394   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:01.995399   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:01.995448   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:02.033949   70908 cri.go:89] found id: ""
	I0311 21:37:02.033974   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.033984   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:02.033991   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:02.034049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:02.074348   70908 cri.go:89] found id: ""
	I0311 21:37:02.074372   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.074382   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:02.074390   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:02.074449   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:02.112456   70908 cri.go:89] found id: ""
	I0311 21:37:02.112479   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.112486   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:02.112491   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:02.112554   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:02.155102   70908 cri.go:89] found id: ""
	I0311 21:37:02.155130   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.155138   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:02.155149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:02.155205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:02.191359   70908 cri.go:89] found id: ""
	I0311 21:37:02.191386   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.191393   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:02.191399   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:02.191450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:02.236178   70908 cri.go:89] found id: ""
	I0311 21:37:02.236203   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.236211   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:02.236220   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:02.236231   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:02.285794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:02.285818   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:02.342348   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:02.342387   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:02.357230   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:02.357257   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:02.431044   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:02.431064   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:02.431076   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.019473   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:05.035841   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:05.035901   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:05.082013   70908 cri.go:89] found id: ""
	I0311 21:37:05.082034   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.082041   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:05.082046   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:05.082091   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:05.126236   70908 cri.go:89] found id: ""
	I0311 21:37:05.126257   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.126265   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:05.126270   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:05.126311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:05.170573   70908 cri.go:89] found id: ""
	I0311 21:37:05.170601   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.170608   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:05.170614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:05.170658   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:05.213921   70908 cri.go:89] found id: ""
	I0311 21:37:05.213948   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.213958   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:05.213965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:05.214025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:05.261178   70908 cri.go:89] found id: ""
	I0311 21:37:05.261206   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.261213   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:05.261221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:05.261273   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:05.306007   70908 cri.go:89] found id: ""
	I0311 21:37:05.306037   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.306045   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:05.306051   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:05.306106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:05.346653   70908 cri.go:89] found id: ""
	I0311 21:37:05.346679   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.346688   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:05.346694   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:05.346752   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:05.384587   70908 cri.go:89] found id: ""
	I0311 21:37:05.384626   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.384637   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:05.384648   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:05.384664   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:05.440676   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:05.440709   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:05.456989   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:05.457018   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:05.553900   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:05.553932   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:05.553947   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.633270   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:05.633300   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:05.647077   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.146975   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:06.449502   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.450008   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:06.028642   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.529826   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.181935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:08.198179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:08.198251   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:08.236484   70908 cri.go:89] found id: ""
	I0311 21:37:08.236506   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.236516   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:08.236524   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:08.236578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:08.277701   70908 cri.go:89] found id: ""
	I0311 21:37:08.277731   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.277739   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:08.277745   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:08.277804   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:08.319559   70908 cri.go:89] found id: ""
	I0311 21:37:08.319585   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.319596   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:08.319604   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:08.319666   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:08.359752   70908 cri.go:89] found id: ""
	I0311 21:37:08.359777   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.359785   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:08.359791   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:08.359849   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:08.397432   70908 cri.go:89] found id: ""
	I0311 21:37:08.397453   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.397460   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:08.397465   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:08.397511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:08.438708   70908 cri.go:89] found id: ""
	I0311 21:37:08.438732   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.438742   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:08.438749   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:08.438807   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:08.479511   70908 cri.go:89] found id: ""
	I0311 21:37:08.479533   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.479560   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:08.479566   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:08.479620   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:08.521634   70908 cri.go:89] found id: ""
	I0311 21:37:08.521659   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.521670   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:08.521680   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:08.521693   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:08.577033   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:08.577065   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:08.592006   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:08.592030   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:08.680862   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:08.680903   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:08.680919   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:08.764991   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:08.765037   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:10.147819   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:12.648352   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:10.949371   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:12.949571   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:11.028245   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:13.028689   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:15.034232   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:11.313168   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:11.326808   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:11.326876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:11.364223   70908 cri.go:89] found id: ""
	I0311 21:37:11.364246   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.364254   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:11.364259   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:11.364311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:11.401361   70908 cri.go:89] found id: ""
	I0311 21:37:11.401391   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.401402   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:11.401409   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:11.401459   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:11.441927   70908 cri.go:89] found id: ""
	I0311 21:37:11.441950   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.441957   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:11.441962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:11.442015   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:11.480804   70908 cri.go:89] found id: ""
	I0311 21:37:11.480836   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.480847   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:11.480855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:11.480913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:11.520135   70908 cri.go:89] found id: ""
	I0311 21:37:11.520166   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.520177   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:11.520193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:11.520255   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:11.559214   70908 cri.go:89] found id: ""
	I0311 21:37:11.559244   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.559255   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:11.559263   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:11.559322   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:11.597346   70908 cri.go:89] found id: ""
	I0311 21:37:11.597374   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.597383   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:11.597391   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:11.597452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:11.646095   70908 cri.go:89] found id: ""
	I0311 21:37:11.646118   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.646127   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:11.646137   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:11.646167   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:11.691813   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:11.691844   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:11.745270   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:11.745303   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:11.761107   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:11.761131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:11.841033   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:11.841059   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:11.841074   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:14.431709   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:14.447064   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:14.447131   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:14.493094   70908 cri.go:89] found id: ""
	I0311 21:37:14.493132   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.493140   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:14.493146   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:14.493195   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:14.537391   70908 cri.go:89] found id: ""
	I0311 21:37:14.537415   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.537423   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:14.537428   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:14.537487   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:14.576284   70908 cri.go:89] found id: ""
	I0311 21:37:14.576306   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.576313   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:14.576319   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:14.576375   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:14.627057   70908 cri.go:89] found id: ""
	I0311 21:37:14.627086   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.627097   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:14.627105   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:14.627163   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:14.669204   70908 cri.go:89] found id: ""
	I0311 21:37:14.669226   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.669233   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:14.669238   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:14.669293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:14.708787   70908 cri.go:89] found id: ""
	I0311 21:37:14.708812   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.708820   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:14.708826   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:14.708892   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:14.749795   70908 cri.go:89] found id: ""
	I0311 21:37:14.749819   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.749828   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:14.749835   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:14.749893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:14.794871   70908 cri.go:89] found id: ""
	I0311 21:37:14.794900   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.794911   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:14.794922   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:14.794936   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:14.850022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:14.850050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:14.866589   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:14.866618   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:14.968887   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:14.968906   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:14.968921   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:15.047376   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:15.047404   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:14.648528   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:16.649275   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:18.649842   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:14.951387   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.451239   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.529411   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:20.030012   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.599834   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:17.613610   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:17.613665   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:17.655340   70908 cri.go:89] found id: ""
	I0311 21:37:17.655361   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.655369   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:17.655374   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:17.655416   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:17.695071   70908 cri.go:89] found id: ""
	I0311 21:37:17.695103   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.695114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:17.695121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:17.695178   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:17.731914   70908 cri.go:89] found id: ""
	I0311 21:37:17.731938   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.731946   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:17.731952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:17.732012   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:17.768198   70908 cri.go:89] found id: ""
	I0311 21:37:17.768224   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.768236   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:17.768242   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:17.768301   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:17.802881   70908 cri.go:89] found id: ""
	I0311 21:37:17.802909   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.802920   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:17.802928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:17.802983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:17.841660   70908 cri.go:89] found id: ""
	I0311 21:37:17.841684   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.841692   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:17.841698   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:17.841749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:17.880154   70908 cri.go:89] found id: ""
	I0311 21:37:17.880183   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.880196   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:17.880205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:17.880260   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:17.919797   70908 cri.go:89] found id: ""
	I0311 21:37:17.919822   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.919829   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:17.919837   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:17.919847   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:17.976607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:17.976636   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:17.993313   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:17.993339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:18.069928   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:18.069956   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:18.069973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:18.152257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:18.152285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:20.706553   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:20.721148   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:20.721214   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:20.762913   70908 cri.go:89] found id: ""
	I0311 21:37:20.762935   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.762943   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:20.762952   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:20.762997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:20.811120   70908 cri.go:89] found id: ""
	I0311 21:37:20.811147   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.811158   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:20.811165   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:20.811225   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:20.848987   70908 cri.go:89] found id: ""
	I0311 21:37:20.849015   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.849026   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:20.849033   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:20.849098   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:20.896201   70908 cri.go:89] found id: ""
	I0311 21:37:20.896226   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.896233   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:20.896240   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:20.896299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:20.936570   70908 cri.go:89] found id: ""
	I0311 21:37:20.936595   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.936603   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:20.936608   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:20.936657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:20.977535   70908 cri.go:89] found id: ""
	I0311 21:37:20.977565   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.977576   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:20.977584   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:20.977647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:21.015370   70908 cri.go:89] found id: ""
	I0311 21:37:21.015395   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.015405   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:21.015413   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:21.015472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:21.146868   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:23.147272   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:19.950972   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:22.450298   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:22.528109   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:24.530216   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:21.056190   70908 cri.go:89] found id: ""
	I0311 21:37:21.056214   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.056225   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:21.056235   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:21.056255   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:21.112022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:21.112051   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:21.128841   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:21.128872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:21.209690   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:21.209716   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:21.209732   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:21.291064   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:21.291099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:23.844334   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:23.860000   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:23.860061   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:23.899777   70908 cri.go:89] found id: ""
	I0311 21:37:23.899805   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.899814   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:23.899820   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:23.899879   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:23.941510   70908 cri.go:89] found id: ""
	I0311 21:37:23.941537   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.941547   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:23.941555   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:23.941627   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:23.980564   70908 cri.go:89] found id: ""
	I0311 21:37:23.980592   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.980602   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:23.980614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:23.980676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:24.020310   70908 cri.go:89] found id: ""
	I0311 21:37:24.020337   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.020348   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:24.020354   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:24.020410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:24.059320   70908 cri.go:89] found id: ""
	I0311 21:37:24.059349   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.059359   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:24.059367   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:24.059424   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:24.096625   70908 cri.go:89] found id: ""
	I0311 21:37:24.096652   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.096660   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:24.096666   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:24.096723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:24.137068   70908 cri.go:89] found id: ""
	I0311 21:37:24.137100   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.137112   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:24.137121   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:24.137182   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:24.181298   70908 cri.go:89] found id: ""
	I0311 21:37:24.181325   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.181336   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:24.181348   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:24.181364   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:24.265423   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:24.265454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:24.318088   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:24.318113   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:24.374402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:24.374430   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:24.388934   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:24.388962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:24.475842   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:25.647164   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:27.650157   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:24.948984   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:26.949444   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:28.950697   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:27.030240   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:29.030848   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:26.976017   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:26.991533   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:26.991602   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:27.034750   70908 cri.go:89] found id: ""
	I0311 21:37:27.034769   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.034776   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:27.034781   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:27.034837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:27.073275   70908 cri.go:89] found id: ""
	I0311 21:37:27.073301   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.073309   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:27.073317   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:27.073363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:27.113396   70908 cri.go:89] found id: ""
	I0311 21:37:27.113418   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.113425   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:27.113431   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:27.113482   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:27.157442   70908 cri.go:89] found id: ""
	I0311 21:37:27.157465   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.157475   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:27.157482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:27.157534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:27.197277   70908 cri.go:89] found id: ""
	I0311 21:37:27.197302   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.197309   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:27.197315   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:27.197363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:27.237967   70908 cri.go:89] found id: ""
	I0311 21:37:27.237991   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.237999   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:27.238005   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:27.238077   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:27.280434   70908 cri.go:89] found id: ""
	I0311 21:37:27.280459   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.280467   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:27.280472   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:27.280535   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:27.334940   70908 cri.go:89] found id: ""
	I0311 21:37:27.334970   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.334982   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:27.334992   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:27.335010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:27.402535   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:27.402570   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:27.416758   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:27.416787   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:27.492762   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:27.492786   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:27.492803   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:27.576989   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:27.577032   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:30.124039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:30.138419   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:30.138483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:30.180900   70908 cri.go:89] found id: ""
	I0311 21:37:30.180926   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.180936   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:30.180944   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:30.180998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:30.222886   70908 cri.go:89] found id: ""
	I0311 21:37:30.222913   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.222921   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:30.222926   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:30.222976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:30.264332   70908 cri.go:89] found id: ""
	I0311 21:37:30.264357   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.264367   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:30.264376   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:30.264436   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:30.307084   70908 cri.go:89] found id: ""
	I0311 21:37:30.307112   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.307123   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:30.307130   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:30.307188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:30.345954   70908 cri.go:89] found id: ""
	I0311 21:37:30.345979   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.345990   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:30.345997   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:30.346057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:30.389408   70908 cri.go:89] found id: ""
	I0311 21:37:30.389439   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.389450   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:30.389457   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:30.389517   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:30.438380   70908 cri.go:89] found id: ""
	I0311 21:37:30.438410   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.438420   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:30.438427   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:30.438489   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:30.479860   70908 cri.go:89] found id: ""
	I0311 21:37:30.479884   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.479895   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:30.479906   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:30.479920   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:30.535831   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:30.535857   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:30.552702   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:30.552725   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:30.633417   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:30.633439   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:30.633454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:30.723106   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:30.723143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:30.147993   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:32.152839   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:31.450942   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.949947   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:31.528469   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.529721   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.270654   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:33.296640   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:33.296710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:33.366053   70908 cri.go:89] found id: ""
	I0311 21:37:33.366082   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.366093   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:33.366101   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:33.366161   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:33.421455   70908 cri.go:89] found id: ""
	I0311 21:37:33.421488   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.421501   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:33.421509   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:33.421583   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:33.464555   70908 cri.go:89] found id: ""
	I0311 21:37:33.464579   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.464586   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:33.464592   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:33.464647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:33.507044   70908 cri.go:89] found id: ""
	I0311 21:37:33.507086   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.507100   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:33.507110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:33.507175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:33.561446   70908 cri.go:89] found id: ""
	I0311 21:37:33.561518   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.561532   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:33.561540   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:33.561601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:33.604496   70908 cri.go:89] found id: ""
	I0311 21:37:33.604519   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.604528   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:33.604534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:33.604591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:33.645754   70908 cri.go:89] found id: ""
	I0311 21:37:33.645781   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.645791   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:33.645797   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:33.645869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:33.690041   70908 cri.go:89] found id: ""
	I0311 21:37:33.690071   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.690082   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:33.690092   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:33.690108   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:33.765708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:33.765737   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:33.765752   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:33.848869   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:33.848906   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:33.900191   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:33.900223   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:33.957101   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:33.957138   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:34.646831   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.647640   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.449429   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:38.948831   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.028141   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:38.028588   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:40.028676   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.474442   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:36.490159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:36.490231   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:36.537784   70908 cri.go:89] found id: ""
	I0311 21:37:36.537812   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.537822   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:36.537829   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:36.537885   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:36.581192   70908 cri.go:89] found id: ""
	I0311 21:37:36.581219   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.581230   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:36.581237   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:36.581297   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:36.620448   70908 cri.go:89] found id: ""
	I0311 21:37:36.620480   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.620492   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:36.620501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:36.620566   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:36.662135   70908 cri.go:89] found id: ""
	I0311 21:37:36.662182   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.662193   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:36.662203   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:36.662268   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:36.708138   70908 cri.go:89] found id: ""
	I0311 21:37:36.708178   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.708188   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:36.708198   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:36.708267   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:36.749668   70908 cri.go:89] found id: ""
	I0311 21:37:36.749697   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.749708   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:36.749717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:36.749783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:36.788455   70908 cri.go:89] found id: ""
	I0311 21:37:36.788476   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.788483   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:36.788488   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:36.788534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:36.830216   70908 cri.go:89] found id: ""
	I0311 21:37:36.830244   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.830257   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:36.830267   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:36.830285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:36.915306   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:36.915336   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:36.958861   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:36.958892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:37.014463   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:37.014489   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:37.029979   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:37.030010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:37.106840   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:39.607929   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:39.626247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:39.626307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:39.667409   70908 cri.go:89] found id: ""
	I0311 21:37:39.667436   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.667446   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:39.667454   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:39.667509   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:39.714167   70908 cri.go:89] found id: ""
	I0311 21:37:39.714198   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.714210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:39.714217   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:39.714275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:39.754759   70908 cri.go:89] found id: ""
	I0311 21:37:39.754787   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.754798   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:39.754805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:39.754865   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:39.794999   70908 cri.go:89] found id: ""
	I0311 21:37:39.795028   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.795038   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:39.795045   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:39.795108   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:39.836284   70908 cri.go:89] found id: ""
	I0311 21:37:39.836310   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.836321   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:39.836328   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:39.836386   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:39.876487   70908 cri.go:89] found id: ""
	I0311 21:37:39.876518   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.876530   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:39.876539   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:39.876601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:39.918750   70908 cri.go:89] found id: ""
	I0311 21:37:39.918785   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.918796   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:39.918813   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:39.918871   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:39.958486   70908 cri.go:89] found id: ""
	I0311 21:37:39.958517   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.958529   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:39.958537   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:39.958550   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:39.973899   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:39.973925   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:40.055954   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:40.055980   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:40.055995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:40.144801   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:40.144826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:40.189692   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:40.189722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:39.148581   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:41.647869   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:43.648550   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:40.949502   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.951277   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.528844   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:44.529317   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.748909   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:42.763794   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:42.763877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:42.801470   70908 cri.go:89] found id: ""
	I0311 21:37:42.801493   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.801500   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:42.801506   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:42.801561   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:42.846267   70908 cri.go:89] found id: ""
	I0311 21:37:42.846294   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.846301   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:42.846307   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:42.846357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:42.890257   70908 cri.go:89] found id: ""
	I0311 21:37:42.890283   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.890294   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:42.890301   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:42.890357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:42.933605   70908 cri.go:89] found id: ""
	I0311 21:37:42.933628   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.933636   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:42.933643   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:42.933699   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:42.979020   70908 cri.go:89] found id: ""
	I0311 21:37:42.979043   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.979052   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:42.979059   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:42.979122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:43.021695   70908 cri.go:89] found id: ""
	I0311 21:37:43.021724   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.021734   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:43.021741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:43.021801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:43.064356   70908 cri.go:89] found id: ""
	I0311 21:37:43.064398   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.064406   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:43.064412   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:43.064457   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:43.101878   70908 cri.go:89] found id: ""
	I0311 21:37:43.101901   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.101909   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:43.101917   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:43.101930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:43.185836   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:43.185861   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:43.185874   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:43.268879   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:43.268912   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:43.319582   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:43.319614   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:43.374996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:43.375022   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:45.890408   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:45.905973   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:45.906041   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:45.951994   70908 cri.go:89] found id: ""
	I0311 21:37:45.952025   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.952040   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:45.952049   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:45.952112   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:45.992913   70908 cri.go:89] found id: ""
	I0311 21:37:45.992953   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.992964   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:45.992971   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:45.993034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:46.036306   70908 cri.go:89] found id: ""
	I0311 21:37:46.036334   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.036345   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:46.036353   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:46.036410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:46.147754   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:48.647534   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:45.450180   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:47.949568   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:46.532244   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:49.028905   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:46.077532   70908 cri.go:89] found id: ""
	I0311 21:37:46.077564   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.077576   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:46.077583   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:46.077633   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:46.115953   70908 cri.go:89] found id: ""
	I0311 21:37:46.115976   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.115983   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:46.115990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:46.116072   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:46.155665   70908 cri.go:89] found id: ""
	I0311 21:37:46.155699   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.155709   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:46.155717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:46.155775   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:46.197650   70908 cri.go:89] found id: ""
	I0311 21:37:46.197677   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.197696   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:46.197705   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:46.197766   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:46.243006   70908 cri.go:89] found id: ""
	I0311 21:37:46.243030   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.243037   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:46.243045   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:46.243058   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:46.294668   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:46.294696   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:46.308700   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:46.308721   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:46.387188   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:46.387207   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:46.387219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:46.480390   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:46.480423   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.027202   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:49.042292   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:49.042361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:49.081547   70908 cri.go:89] found id: ""
	I0311 21:37:49.081568   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.081579   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:49.081585   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:49.081632   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:49.127438   70908 cri.go:89] found id: ""
	I0311 21:37:49.127467   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.127477   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:49.127485   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:49.127545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:49.173992   70908 cri.go:89] found id: ""
	I0311 21:37:49.174024   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.174033   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:49.174042   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:49.174114   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:49.217087   70908 cri.go:89] found id: ""
	I0311 21:37:49.217120   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.217130   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:49.217138   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:49.217198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:49.255929   70908 cri.go:89] found id: ""
	I0311 21:37:49.255955   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.255970   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:49.255978   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:49.256037   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:49.296373   70908 cri.go:89] found id: ""
	I0311 21:37:49.296399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.296409   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:49.296417   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:49.296474   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:49.335063   70908 cri.go:89] found id: ""
	I0311 21:37:49.335092   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.335103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:49.335110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:49.335176   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:49.378374   70908 cri.go:89] found id: ""
	I0311 21:37:49.378399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.378406   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:49.378414   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:49.378427   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.422193   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:49.422220   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:49.474861   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:49.474893   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:49.490193   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:49.490219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:49.571857   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:49.571880   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:49.571895   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:51.149814   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:53.648033   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:49.949603   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:51.949943   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:53.951963   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:51.531753   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:54.028723   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:52.168934   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:52.183086   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:52.183154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:52.221632   70908 cri.go:89] found id: ""
	I0311 21:37:52.221664   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.221675   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:52.221682   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:52.221743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:52.261550   70908 cri.go:89] found id: ""
	I0311 21:37:52.261575   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.261582   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:52.261588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:52.261638   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:52.302879   70908 cri.go:89] found id: ""
	I0311 21:37:52.302910   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.302920   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:52.302927   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:52.302987   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:52.346462   70908 cri.go:89] found id: ""
	I0311 21:37:52.346485   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.346494   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:52.346499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:52.346551   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:52.387949   70908 cri.go:89] found id: ""
	I0311 21:37:52.387977   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.387988   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:52.387995   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:52.388052   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:52.428527   70908 cri.go:89] found id: ""
	I0311 21:37:52.428564   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.428574   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:52.428582   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:52.428649   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:52.469516   70908 cri.go:89] found id: ""
	I0311 21:37:52.469548   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.469558   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:52.469565   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:52.469616   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:52.508371   70908 cri.go:89] found id: ""
	I0311 21:37:52.508407   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.508417   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:52.508429   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:52.508444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:52.587309   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:52.587346   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:52.587361   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:52.666419   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:52.666449   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:52.713150   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:52.713184   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:52.768011   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:52.768041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.284835   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:55.298742   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:55.298799   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:55.340215   70908 cri.go:89] found id: ""
	I0311 21:37:55.340240   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.340251   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:55.340257   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:55.340321   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:55.377930   70908 cri.go:89] found id: ""
	I0311 21:37:55.377956   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.377967   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:55.377974   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:55.378039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:55.418786   70908 cri.go:89] found id: ""
	I0311 21:37:55.418814   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.418822   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:55.418827   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:55.418883   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:55.461566   70908 cri.go:89] found id: ""
	I0311 21:37:55.461586   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.461593   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:55.461601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:55.461655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:55.502917   70908 cri.go:89] found id: ""
	I0311 21:37:55.502945   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.502955   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:55.502962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:55.503022   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:55.551417   70908 cri.go:89] found id: ""
	I0311 21:37:55.551441   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.551454   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:55.551462   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:55.551514   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:55.596060   70908 cri.go:89] found id: ""
	I0311 21:37:55.596092   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.596103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:55.596111   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:55.596172   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:55.635495   70908 cri.go:89] found id: ""
	I0311 21:37:55.635523   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.635535   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:55.635547   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:55.635564   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:55.691705   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:55.691735   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.707696   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:55.707718   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:55.780432   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:55.780452   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:55.780465   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:55.866033   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:55.866067   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:55.648873   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.147404   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:56.452135   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.951150   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:56.528533   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.529769   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.437299   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:58.453058   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:58.453125   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:58.493317   70908 cri.go:89] found id: ""
	I0311 21:37:58.493339   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.493347   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:58.493353   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:58.493408   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:58.543533   70908 cri.go:89] found id: ""
	I0311 21:37:58.543556   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.543567   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:58.543578   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:58.543634   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:58.585255   70908 cri.go:89] found id: ""
	I0311 21:37:58.585282   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.585292   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:58.585300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:58.585359   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:58.622393   70908 cri.go:89] found id: ""
	I0311 21:37:58.622421   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.622428   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:58.622434   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:58.622501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:58.661939   70908 cri.go:89] found id: ""
	I0311 21:37:58.661963   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.661971   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:58.661977   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:58.662034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:58.703628   70908 cri.go:89] found id: ""
	I0311 21:37:58.703663   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.703674   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:58.703682   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:58.703743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:58.742553   70908 cri.go:89] found id: ""
	I0311 21:37:58.742583   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.742594   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:58.742601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:58.742662   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:58.785016   70908 cri.go:89] found id: ""
	I0311 21:37:58.785040   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.785047   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:58.785055   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:58.785071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:58.857757   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:58.857773   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:58.857786   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:58.946120   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:58.946148   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:58.996288   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:58.996328   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:59.055371   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:59.055407   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
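
The 70908 entries above are minikube's log collector looping: CRI-O reports no kube-apiserver (or any other control-plane) container, so each pass falls back to dumping kubelet, dmesg, CRI-O and "describe nodes" output before retrying. A minimal sketch of the same diagnostics run by hand on the node, using only commands already shown verbatim in this log:

    # any control-plane containers at all? (empty output = none created yet)
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd

    # kubelet and container-runtime logs, as gathered above
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400

    # fails with "connection refused" while the apiserver is down
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
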
	I0311 21:38:00.651621   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.149663   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:00.951776   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.451012   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:01.028303   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.028600   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:05.032276   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:01.571092   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:01.591149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:01.591238   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:01.629156   70908 cri.go:89] found id: ""
	I0311 21:38:01.629184   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.629196   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:01.629203   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:01.629261   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:01.673656   70908 cri.go:89] found id: ""
	I0311 21:38:01.673680   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.673687   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:01.673692   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:01.673739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:01.713361   70908 cri.go:89] found id: ""
	I0311 21:38:01.713389   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.713397   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:01.713403   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:01.713450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:01.757256   70908 cri.go:89] found id: ""
	I0311 21:38:01.757286   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.757298   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:01.757305   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:01.757362   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:01.797538   70908 cri.go:89] found id: ""
	I0311 21:38:01.797565   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.797573   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:01.797580   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:01.797635   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:01.838664   70908 cri.go:89] found id: ""
	I0311 21:38:01.838692   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.838701   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:01.838707   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:01.838754   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:01.893638   70908 cri.go:89] found id: ""
	I0311 21:38:01.893668   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.893679   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:01.893686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:01.893747   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:01.935547   70908 cri.go:89] found id: ""
	I0311 21:38:01.935569   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.935577   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:01.935585   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:01.935596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:01.989964   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:01.989988   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:02.004949   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:02.004973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:02.082006   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:02.082024   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:02.082041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:02.171040   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:02.171072   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:04.724699   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:04.741445   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:04.741512   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:04.783924   70908 cri.go:89] found id: ""
	I0311 21:38:04.783951   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.783962   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:04.783969   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:04.784028   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:04.825806   70908 cri.go:89] found id: ""
	I0311 21:38:04.825835   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.825845   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:04.825852   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:04.825913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:04.864070   70908 cri.go:89] found id: ""
	I0311 21:38:04.864106   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.864118   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:04.864126   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:04.864181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:04.901735   70908 cri.go:89] found id: ""
	I0311 21:38:04.901759   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.901769   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:04.901777   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:04.901832   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:04.941473   70908 cri.go:89] found id: ""
	I0311 21:38:04.941496   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.941505   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:04.941513   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:04.941569   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:04.993132   70908 cri.go:89] found id: ""
	I0311 21:38:04.993162   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.993170   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:04.993178   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:04.993237   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:05.037925   70908 cri.go:89] found id: ""
	I0311 21:38:05.037950   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.037960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:05.037967   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:05.038026   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:05.080726   70908 cri.go:89] found id: ""
	I0311 21:38:05.080773   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.080784   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:05.080794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:05.080806   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:05.138205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:05.138233   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:05.155048   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:05.155071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:05.233067   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:05.233086   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:05.233099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:05.317897   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:05.317928   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:05.646661   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.647686   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:05.949900   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.950261   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.528049   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:09.530724   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.863484   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:07.877342   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:07.877411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:07.916352   70908 cri.go:89] found id: ""
	I0311 21:38:07.916374   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.916383   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:07.916391   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:07.916454   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:07.954833   70908 cri.go:89] found id: ""
	I0311 21:38:07.954854   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.954863   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:07.954870   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:07.954926   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:07.993124   70908 cri.go:89] found id: ""
	I0311 21:38:07.993152   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.993161   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:07.993168   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:07.993232   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:08.039081   70908 cri.go:89] found id: ""
	I0311 21:38:08.039108   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.039118   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:08.039125   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:08.039191   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:08.084627   70908 cri.go:89] found id: ""
	I0311 21:38:08.084650   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.084658   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:08.084665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:08.084712   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:08.125986   70908 cri.go:89] found id: ""
	I0311 21:38:08.126015   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.126026   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:08.126034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:08.126080   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:08.167149   70908 cri.go:89] found id: ""
	I0311 21:38:08.167176   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.167188   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:08.167193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:08.167252   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:08.204988   70908 cri.go:89] found id: ""
	I0311 21:38:08.205012   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.205020   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:08.205028   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:08.205043   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:08.295226   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:08.295268   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:08.357789   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:08.357820   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:08.434091   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:08.434132   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:08.455208   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:08.455240   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:08.529620   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
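
Every "describe nodes" attempt fails the same way because the kubeconfig points at the apiserver's secure port on the node, and crictl finds no kube-apiserver container, so nothing is listening on localhost:8443. A quick manual check from inside the guest (a sketch, assuming curl is available in the guest image) would confirm this:

    # hypothetical check; curl availability in the guest is an assumption
    curl -k --connect-timeout 2 https://localhost:8443/healthz || echo "apiserver not listening"
    sudo crictl ps -a --name=kube-apiserver   # empty while the container has not been created
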
	I0311 21:38:11.030060   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:09.648047   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.649628   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:13.652370   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:10.450139   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:12.949551   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.531354   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:14.029703   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.044303   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:11.046353   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:11.088067   70908 cri.go:89] found id: ""
	I0311 21:38:11.088099   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.088110   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:11.088117   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:11.088177   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:11.131077   70908 cri.go:89] found id: ""
	I0311 21:38:11.131104   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.131114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:11.131121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:11.131181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:11.172409   70908 cri.go:89] found id: ""
	I0311 21:38:11.172431   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.172439   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:11.172444   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:11.172496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:11.216775   70908 cri.go:89] found id: ""
	I0311 21:38:11.216817   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.216825   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:11.216830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:11.216886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:11.255105   70908 cri.go:89] found id: ""
	I0311 21:38:11.255129   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.255137   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:11.255142   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:11.255205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:11.292397   70908 cri.go:89] found id: ""
	I0311 21:38:11.292429   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.292440   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:11.292448   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:11.292518   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:11.330376   70908 cri.go:89] found id: ""
	I0311 21:38:11.330397   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.330408   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:11.330415   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:11.330476   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:11.367699   70908 cri.go:89] found id: ""
	I0311 21:38:11.367727   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.367737   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:11.367748   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:11.367763   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:11.421847   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:11.421876   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:11.437570   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:11.437593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:11.522084   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:11.522108   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:11.522123   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:11.606181   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:11.606228   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.153952   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:14.175726   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:14.175798   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:14.221752   70908 cri.go:89] found id: ""
	I0311 21:38:14.221784   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.221798   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:14.221807   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:14.221895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:14.286690   70908 cri.go:89] found id: ""
	I0311 21:38:14.286720   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.286740   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:14.286757   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:14.286824   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:14.343764   70908 cri.go:89] found id: ""
	I0311 21:38:14.343790   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.343799   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:14.343806   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:14.343876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:14.381198   70908 cri.go:89] found id: ""
	I0311 21:38:14.381220   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.381230   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:14.381237   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:14.381307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:14.421578   70908 cri.go:89] found id: ""
	I0311 21:38:14.421603   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.421613   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:14.421620   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:14.421678   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:14.462945   70908 cri.go:89] found id: ""
	I0311 21:38:14.462972   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.462982   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:14.462990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:14.463049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:14.503503   70908 cri.go:89] found id: ""
	I0311 21:38:14.503532   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.503543   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:14.503550   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:14.503610   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:14.543987   70908 cri.go:89] found id: ""
	I0311 21:38:14.544021   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.544034   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:14.544045   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:14.544062   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:14.624781   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:14.624804   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:14.624821   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:14.707130   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:14.707161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.750815   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:14.750848   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:14.806855   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:14.806882   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:16.149516   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:18.646716   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:14.949827   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:16.953660   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:16.031935   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:18.529085   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:17.325267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:17.340421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:17.340483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:17.382808   70908 cri.go:89] found id: ""
	I0311 21:38:17.382831   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.382841   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:17.382849   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:17.382906   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:17.424838   70908 cri.go:89] found id: ""
	I0311 21:38:17.424865   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.424875   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:17.424883   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:17.424940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:17.466298   70908 cri.go:89] found id: ""
	I0311 21:38:17.466320   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.466327   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:17.466333   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:17.466397   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:17.506648   70908 cri.go:89] found id: ""
	I0311 21:38:17.506678   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.506685   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:17.506691   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:17.506739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:17.544019   70908 cri.go:89] found id: ""
	I0311 21:38:17.544048   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.544057   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:17.544067   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:17.544154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:17.583691   70908 cri.go:89] found id: ""
	I0311 21:38:17.583710   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.583717   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:17.583723   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:17.583768   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:17.624432   70908 cri.go:89] found id: ""
	I0311 21:38:17.624453   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.624460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:17.624466   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:17.624516   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:17.663253   70908 cri.go:89] found id: ""
	I0311 21:38:17.663294   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.663312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:17.663322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:17.663339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:17.749928   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:17.749962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:17.792817   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:17.792853   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:17.847391   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:17.847419   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:17.862813   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:17.862835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:17.935307   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:20.435995   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:20.452441   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:20.452510   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:20.491960   70908 cri.go:89] found id: ""
	I0311 21:38:20.491985   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.491992   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:20.491998   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:20.492045   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:20.531679   70908 cri.go:89] found id: ""
	I0311 21:38:20.531700   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.531707   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:20.531712   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:20.531764   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:20.571666   70908 cri.go:89] found id: ""
	I0311 21:38:20.571687   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.571694   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:20.571699   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:20.571762   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:20.611165   70908 cri.go:89] found id: ""
	I0311 21:38:20.611187   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.611194   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:20.611199   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:20.611248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:20.648680   70908 cri.go:89] found id: ""
	I0311 21:38:20.648709   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.648720   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:20.648728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:20.648801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:20.690177   70908 cri.go:89] found id: ""
	I0311 21:38:20.690204   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.690215   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:20.690222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:20.690298   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:20.728918   70908 cri.go:89] found id: ""
	I0311 21:38:20.728949   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.728960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:20.728968   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:20.729039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:20.773559   70908 cri.go:89] found id: ""
	I0311 21:38:20.773586   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.773596   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:20.773607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:20.773623   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:20.788709   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:20.788750   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:20.869832   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:20.869856   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:20.869868   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:20.963515   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:20.963544   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:21.007029   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:21.007055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:21.147703   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.660410   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:19.449416   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:21.451194   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.950401   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:20.529497   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:22.529947   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:25.030431   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
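
The interleaved pod_ready lines come from three other concurrent runs (PIDs 70604, 70417 and 70458) blocked waiting for their metrics-server pods to report Ready; each line is one poll that still observes Ready=False. A sketch of the same readiness check done by hand (the pod name is taken from the log; the kube context placeholder is illustrative):

    # hypothetical manual check against one of the profiles under test
    kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-7qw98 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
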
	I0311 21:38:23.566134   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:23.583855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:23.583911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:23.623605   70908 cri.go:89] found id: ""
	I0311 21:38:23.623633   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.623656   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:23.623664   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:23.623719   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:23.663058   70908 cri.go:89] found id: ""
	I0311 21:38:23.663081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.663091   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:23.663098   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:23.663157   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:23.701930   70908 cri.go:89] found id: ""
	I0311 21:38:23.701963   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.701975   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:23.701985   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:23.702049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:23.743925   70908 cri.go:89] found id: ""
	I0311 21:38:23.743955   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.743964   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:23.743970   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:23.744046   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:23.784030   70908 cri.go:89] found id: ""
	I0311 21:38:23.784055   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.784066   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:23.784073   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:23.784132   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:23.823054   70908 cri.go:89] found id: ""
	I0311 21:38:23.823081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.823089   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:23.823097   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:23.823156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:23.863629   70908 cri.go:89] found id: ""
	I0311 21:38:23.863654   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.863662   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:23.863668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:23.863724   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:23.904429   70908 cri.go:89] found id: ""
	I0311 21:38:23.904454   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.904462   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:23.904470   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:23.904481   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:23.962356   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:23.962393   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:23.977667   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:23.977689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:24.068791   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:24.068820   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:24.068835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:24.157857   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:24.157892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:26.147447   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:28.148069   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:26.450243   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:28.950495   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:27.530194   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:30.029286   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:26.705872   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:26.720840   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:26.720936   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:26.766449   70908 cri.go:89] found id: ""
	I0311 21:38:26.766480   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.766490   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:26.766496   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:26.766557   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:26.806179   70908 cri.go:89] found id: ""
	I0311 21:38:26.806203   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.806210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:26.806216   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:26.806275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:26.850737   70908 cri.go:89] found id: ""
	I0311 21:38:26.850765   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.850775   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:26.850785   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:26.850845   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:26.897694   70908 cri.go:89] found id: ""
	I0311 21:38:26.897722   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.897733   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:26.897744   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:26.897802   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:26.940940   70908 cri.go:89] found id: ""
	I0311 21:38:26.940962   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.940969   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:26.940975   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:26.941021   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:26.978576   70908 cri.go:89] found id: ""
	I0311 21:38:26.978604   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.978614   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:26.978625   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:26.978682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:27.016331   70908 cri.go:89] found id: ""
	I0311 21:38:27.016363   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.016374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:27.016381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:27.016439   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:27.061541   70908 cri.go:89] found id: ""
	I0311 21:38:27.061569   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.061580   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:27.061590   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:27.061609   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:27.154977   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:27.155017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:27.204458   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:27.204488   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:27.259960   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:27.259997   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:27.277806   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:27.277832   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:27.356111   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:29.856828   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:29.871331   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:29.871413   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:29.912867   70908 cri.go:89] found id: ""
	I0311 21:38:29.912895   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.912904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:29.912910   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:29.912973   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:29.953458   70908 cri.go:89] found id: ""
	I0311 21:38:29.953483   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.953491   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:29.953497   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:29.953553   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:29.997873   70908 cri.go:89] found id: ""
	I0311 21:38:29.997904   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.997912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:29.997921   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:29.997983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:30.038831   70908 cri.go:89] found id: ""
	I0311 21:38:30.038861   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.038872   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:30.038880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:30.038940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:30.082089   70908 cri.go:89] found id: ""
	I0311 21:38:30.082117   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.082127   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:30.082135   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:30.082213   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:30.121167   70908 cri.go:89] found id: ""
	I0311 21:38:30.121198   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.121209   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:30.121216   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:30.121274   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:30.162342   70908 cri.go:89] found id: ""
	I0311 21:38:30.162371   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.162380   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:30.162393   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:30.162452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:30.201727   70908 cri.go:89] found id: ""
	I0311 21:38:30.201753   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.201761   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:30.201769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:30.201780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:30.283314   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:30.283346   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:30.333900   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:30.333930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:30.391761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:30.391798   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:30.407907   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:30.407930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:30.489560   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:30.646773   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.649048   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:31.456251   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:33.951315   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.529160   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:34.530183   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.989976   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:33.004724   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:33.004814   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:33.049701   70908 cri.go:89] found id: ""
	I0311 21:38:33.049733   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.049743   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:33.049753   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:33.049823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:33.097759   70908 cri.go:89] found id: ""
	I0311 21:38:33.097792   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.097804   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:33.097811   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:33.097875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:33.143257   70908 cri.go:89] found id: ""
	I0311 21:38:33.143291   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.143300   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:33.143308   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:33.143376   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:33.187434   70908 cri.go:89] found id: ""
	I0311 21:38:33.187464   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.187477   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:33.187483   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:33.187558   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:33.236201   70908 cri.go:89] found id: ""
	I0311 21:38:33.236230   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.236239   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:33.236245   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:33.236312   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:33.279710   70908 cri.go:89] found id: ""
	I0311 21:38:33.279783   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.279816   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:33.279830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:33.279898   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:33.325022   70908 cri.go:89] found id: ""
	I0311 21:38:33.325053   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.325064   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:33.325072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:33.325138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:33.368588   70908 cri.go:89] found id: ""
	I0311 21:38:33.368614   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.368622   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:33.368629   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:33.368640   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:33.427761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:33.427801   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:33.444440   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:33.444472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:33.527745   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:33.527764   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:33.527775   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:33.608215   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:33.608248   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:35.146541   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:37.146917   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.450175   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:38.949371   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.531125   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:39.028780   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.158253   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:36.172370   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:36.172438   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:36.216905   70908 cri.go:89] found id: ""
	I0311 21:38:36.216935   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.216945   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:36.216951   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:36.216996   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:36.260844   70908 cri.go:89] found id: ""
	I0311 21:38:36.260875   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.260885   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:36.260890   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:36.260941   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:36.306730   70908 cri.go:89] found id: ""
	I0311 21:38:36.306755   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.306767   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:36.306772   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:36.306820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:36.346957   70908 cri.go:89] found id: ""
	I0311 21:38:36.346993   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.347004   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:36.347012   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:36.347082   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:36.392265   70908 cri.go:89] found id: ""
	I0311 21:38:36.392295   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.392306   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:36.392313   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:36.392379   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:36.433383   70908 cri.go:89] found id: ""
	I0311 21:38:36.433407   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.433414   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:36.433421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:36.433467   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:36.471291   70908 cri.go:89] found id: ""
	I0311 21:38:36.471325   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.471336   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:36.471344   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:36.471411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:36.514662   70908 cri.go:89] found id: ""
	I0311 21:38:36.514688   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.514698   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:36.514708   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:36.514722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:36.533222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:36.533251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:36.616359   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:36.616384   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:36.616400   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:36.719105   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:36.719137   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:36.771125   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:36.771156   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.324847   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:39.341149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:39.341218   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:39.380284   70908 cri.go:89] found id: ""
	I0311 21:38:39.380324   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.380335   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:39.380343   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:39.380407   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:39.429860   70908 cri.go:89] found id: ""
	I0311 21:38:39.429886   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.429894   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:39.429899   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:39.429960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:39.468089   70908 cri.go:89] found id: ""
	I0311 21:38:39.468113   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.468121   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:39.468127   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:39.468188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:39.508589   70908 cri.go:89] found id: ""
	I0311 21:38:39.508617   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.508628   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:39.508636   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:39.508695   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:39.552427   70908 cri.go:89] found id: ""
	I0311 21:38:39.552451   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.552459   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:39.552464   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:39.552511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:39.592586   70908 cri.go:89] found id: ""
	I0311 21:38:39.592607   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.592615   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:39.592621   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:39.592670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:39.637138   70908 cri.go:89] found id: ""
	I0311 21:38:39.637167   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.637178   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:39.637186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:39.637248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:39.679422   70908 cri.go:89] found id: ""
	I0311 21:38:39.679457   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.679470   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:39.679482   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:39.679499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.734815   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:39.734850   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:39.750448   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:39.750472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:39.832912   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:39.832936   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:39.832951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:39.924020   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:39.924061   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:39.648759   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:42.146226   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:40.950021   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:42.951344   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:41.528407   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:43.529130   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:43.529166   70458 pod_ready.go:81] duration metric: took 4m0.007627735s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	E0311 21:38:43.529179   70458 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0311 21:38:43.529188   70458 pod_ready.go:38] duration metric: took 4m4.551429192s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:38:43.529207   70458 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:38:43.529242   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:43.529306   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:43.589292   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:43.589314   70458 cri.go:89] found id: ""
	I0311 21:38:43.589323   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:43.589388   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.595182   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:43.595267   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:43.645002   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:43.645027   70458 cri.go:89] found id: ""
	I0311 21:38:43.645036   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:43.645088   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.650463   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:43.650537   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:43.693876   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:43.693894   70458 cri.go:89] found id: ""
	I0311 21:38:43.693902   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:43.693958   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.699273   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:43.699340   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:43.752552   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:43.752585   70458 cri.go:89] found id: ""
	I0311 21:38:43.752596   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:43.752667   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.758307   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:43.758384   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:43.802761   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:43.802789   70458 cri.go:89] found id: ""
	I0311 21:38:43.802798   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:43.802858   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.807796   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:43.807867   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:43.853820   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:43.853843   70458 cri.go:89] found id: ""
	I0311 21:38:43.853851   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:43.853907   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.859377   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:43.859451   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:43.910605   70458 cri.go:89] found id: ""
	I0311 21:38:43.910640   70458 logs.go:276] 0 containers: []
	W0311 21:38:43.910648   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:43.910655   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:43.910702   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:43.955602   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:43.955624   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:43.955629   70458 cri.go:89] found id: ""
	I0311 21:38:43.955645   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:43.955713   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.960856   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.965889   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:43.965919   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:44.013879   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:44.013908   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:44.064641   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:44.064669   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:44.118095   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:44.118120   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:44.177775   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:44.177819   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:44.242090   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:44.242129   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:44.261628   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:44.261665   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:44.322616   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:44.322656   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:44.388117   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:44.388159   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:44.445980   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:44.446018   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:44.980199   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:44.980243   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:45.138312   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:45.138368   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:45.208626   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:45.208664   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:42.472932   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:42.488034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:42.488090   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:42.530945   70908 cri.go:89] found id: ""
	I0311 21:38:42.530971   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.530981   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:42.530989   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:42.531053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:42.571906   70908 cri.go:89] found id: ""
	I0311 21:38:42.571939   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.571951   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:42.571960   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:42.572029   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:42.613198   70908 cri.go:89] found id: ""
	I0311 21:38:42.613228   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.613239   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:42.613247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:42.613330   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:42.654740   70908 cri.go:89] found id: ""
	I0311 21:38:42.654762   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.654770   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:42.654775   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:42.654821   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:42.694797   70908 cri.go:89] found id: ""
	I0311 21:38:42.694836   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.694847   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:42.694854   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:42.694931   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:42.738918   70908 cri.go:89] found id: ""
	I0311 21:38:42.738946   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.738958   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:42.738965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:42.739032   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:42.780836   70908 cri.go:89] found id: ""
	I0311 21:38:42.780870   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.780881   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:42.780888   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:42.780943   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:42.824672   70908 cri.go:89] found id: ""
	I0311 21:38:42.824701   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.824712   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:42.824721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:42.824747   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:42.877219   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:42.877253   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:42.934996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:42.935033   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:42.952125   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:42.952152   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:43.036657   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:43.036678   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:43.036695   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:45.629959   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:45.648501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:45.648581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:45.690083   70908 cri.go:89] found id: ""
	I0311 21:38:45.690117   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.690128   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:45.690136   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:45.690201   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:45.736497   70908 cri.go:89] found id: ""
	I0311 21:38:45.736519   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.736526   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:45.736531   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:45.736576   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:45.778590   70908 cri.go:89] found id: ""
	I0311 21:38:45.778625   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.778636   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:45.778645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:45.778723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:45.822322   70908 cri.go:89] found id: ""
	I0311 21:38:45.822351   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.822359   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:45.822365   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:45.822419   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:45.868591   70908 cri.go:89] found id: ""
	I0311 21:38:45.868618   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.868627   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:45.868633   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:45.868680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:45.915137   70908 cri.go:89] found id: ""
	I0311 21:38:45.915165   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.915178   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:45.915187   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:45.915258   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:45.960432   70908 cri.go:89] found id: ""
	I0311 21:38:45.960459   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.960469   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:45.960476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:45.960529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:46.006089   70908 cri.go:89] found id: ""
	I0311 21:38:46.006168   70908 logs.go:276] 0 containers: []
	W0311 21:38:46.006185   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:46.006195   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:46.006209   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:44.153091   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:46.650654   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:44.951550   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:46.952791   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:47.756629   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:47.776613   70458 api_server.go:72] duration metric: took 4m14.182101385s to wait for apiserver process to appear ...
	I0311 21:38:47.776651   70458 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:38:47.776691   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:47.776774   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:47.826534   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:47.826553   70458 cri.go:89] found id: ""
	I0311 21:38:47.826560   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:47.826609   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.831565   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:47.831637   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:47.876504   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:47.876531   70458 cri.go:89] found id: ""
	I0311 21:38:47.876541   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:47.876598   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.882130   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:47.882224   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:47.930064   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:47.930087   70458 cri.go:89] found id: ""
	I0311 21:38:47.930096   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:47.930139   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.935357   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:47.935433   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:47.989169   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:47.989196   70458 cri.go:89] found id: ""
	I0311 21:38:47.989206   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:47.989262   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.994341   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:47.994401   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:48.037592   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:48.037619   70458 cri.go:89] found id: ""
	I0311 21:38:48.037629   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:48.037692   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.043377   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:48.043453   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:48.088629   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:48.088651   70458 cri.go:89] found id: ""
	I0311 21:38:48.088671   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:48.088722   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.093944   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:48.094016   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:48.144943   70458 cri.go:89] found id: ""
	I0311 21:38:48.144971   70458 logs.go:276] 0 containers: []
	W0311 21:38:48.144983   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:48.144990   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:48.145050   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:48.188857   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:48.188877   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:48.188881   70458 cri.go:89] found id: ""
	I0311 21:38:48.188887   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:48.188934   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.195123   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.200643   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:48.200673   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:48.246864   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:48.246894   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:48.715510   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:48.715545   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:48.775676   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:48.775716   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:48.793121   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:48.793157   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:48.863992   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:48.864040   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:48.922775   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:48.922810   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:48.996820   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:48.996866   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:49.045065   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:49.045097   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:49.199072   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:49.199137   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:49.283329   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:49.283360   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:49.340461   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:49.340502   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:49.391436   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:49.391460   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:46.064257   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:46.064296   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:46.080304   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:46.080337   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:46.177978   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:46.178001   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:46.178017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:46.265260   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:46.265298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:48.814221   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:48.835695   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:48.835793   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:48.898391   70908 cri.go:89] found id: ""
	I0311 21:38:48.898418   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.898429   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:48.898437   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:48.898501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:48.972552   70908 cri.go:89] found id: ""
	I0311 21:38:48.972596   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.972607   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:48.972617   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:48.972684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:49.022346   70908 cri.go:89] found id: ""
	I0311 21:38:49.022371   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.022379   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:49.022384   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:49.022430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:49.078415   70908 cri.go:89] found id: ""
	I0311 21:38:49.078444   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.078455   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:49.078463   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:49.078526   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:49.119369   70908 cri.go:89] found id: ""
	I0311 21:38:49.119402   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.119412   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:49.119420   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:49.119497   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:49.169866   70908 cri.go:89] found id: ""
	I0311 21:38:49.169897   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.169908   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:49.169916   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:49.169978   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:49.223619   70908 cri.go:89] found id: ""
	I0311 21:38:49.223642   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.223650   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:49.223656   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:49.223704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:49.278499   70908 cri.go:89] found id: ""
	I0311 21:38:49.278531   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.278542   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:49.278551   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:49.278563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:49.294734   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:49.294760   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:49.390223   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:49.390252   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:49.390267   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:49.481214   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:49.481250   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:49.530285   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:49.530321   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:49.149825   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.648269   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:53.140832   70604 pod_ready.go:81] duration metric: took 4m0.000856291s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" ...
	E0311 21:38:53.140873   70604 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" (will not retry!)
	I0311 21:38:53.140895   70604 pod_ready.go:38] duration metric: took 4m13.032115697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:38:53.140925   70604 kubeadm.go:591] duration metric: took 4m21.406945055s to restartPrimaryControlPlane
	W0311 21:38:53.140993   70604 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:38:53.141028   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:38:49.450738   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.950491   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:53.952209   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.955522   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:38:51.961814   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0311 21:38:51.963188   70458 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:38:51.963209   70458 api_server.go:131] duration metric: took 4.186550701s to wait for apiserver health ...
	I0311 21:38:51.963218   70458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:38:51.963242   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:51.963294   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:52.020708   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:52.020727   70458 cri.go:89] found id: ""
	I0311 21:38:52.020746   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:52.020815   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.026606   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:52.026668   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:52.072045   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:52.072063   70458 cri.go:89] found id: ""
	I0311 21:38:52.072071   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:52.072130   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.078592   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:52.078771   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:52.139445   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:52.139480   70458 cri.go:89] found id: ""
	I0311 21:38:52.139490   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:52.139548   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.148641   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:52.148724   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:52.199332   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:52.199360   70458 cri.go:89] found id: ""
	I0311 21:38:52.199371   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:52.199433   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.207033   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:52.207096   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:52.267514   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:52.267540   70458 cri.go:89] found id: ""
	I0311 21:38:52.267549   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:52.267615   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.274048   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:52.274132   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:52.330293   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:52.330324   70458 cri.go:89] found id: ""
	I0311 21:38:52.330334   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:52.330395   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.336062   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:52.336143   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:52.381909   70458 cri.go:89] found id: ""
	I0311 21:38:52.381941   70458 logs.go:276] 0 containers: []
	W0311 21:38:52.381952   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:52.381960   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:52.382026   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:52.441879   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:52.441908   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:52.441919   70458 cri.go:89] found id: ""
	I0311 21:38:52.441928   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:52.441988   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.449288   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.456632   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:52.456664   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.526327   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:52.526368   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:52.545008   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:52.545035   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:52.699959   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:52.699995   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:52.762045   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:52.762079   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:52.828963   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:52.829005   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:52.874202   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:52.874237   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:52.916842   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:52.916872   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:52.969778   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:52.969807   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:53.365097   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:53.365147   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:53.446533   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:53.446576   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:53.500017   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:53.500043   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:53.572904   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:53.572954   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
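	The log-gathering pattern above is the same for each component: resolve container IDs by name, then tail that container's log. On the node, the two commands from the log are (the container ID placeholder is filled from the first command's output):

	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo crictl logs --tail 400 <container-id>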
	I0311 21:38:52.087848   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:52.108284   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:52.108351   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:52.161648   70908 cri.go:89] found id: ""
	I0311 21:38:52.161680   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.161691   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:52.161698   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:52.161763   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:52.206552   70908 cri.go:89] found id: ""
	I0311 21:38:52.206577   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.206588   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:52.206596   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:52.206659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:52.253954   70908 cri.go:89] found id: ""
	I0311 21:38:52.253984   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.253996   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:52.254004   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:52.254068   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:52.302343   70908 cri.go:89] found id: ""
	I0311 21:38:52.302384   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.302396   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:52.302404   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:52.302472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:52.345581   70908 cri.go:89] found id: ""
	I0311 21:38:52.345608   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.345618   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:52.345624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:52.345683   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:52.392502   70908 cri.go:89] found id: ""
	I0311 21:38:52.392531   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.392542   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:52.392549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:52.392601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:52.447625   70908 cri.go:89] found id: ""
	I0311 21:38:52.447651   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.447661   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:52.447668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:52.447728   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:52.490965   70908 cri.go:89] found id: ""
	I0311 21:38:52.490994   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.491007   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:52.491019   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:52.491034   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:52.539604   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:52.539650   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.597735   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:52.597771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:52.617572   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:52.617610   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:52.706724   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:52.706753   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:52.706769   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
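	For the run logging as 70908 (v1.20.0), every per-component listing above returned no containers, so only the kubelet and CRI-O journals and the generic container-status fallback carry information. That fallback command, taken verbatim from the log, can be run on the node as-is:

	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a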
	I0311 21:38:55.293550   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:55.313904   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:55.314005   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:55.368607   70908 cri.go:89] found id: ""
	I0311 21:38:55.368639   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.368647   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:55.368654   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:55.368714   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:55.434052   70908 cri.go:89] found id: ""
	I0311 21:38:55.434081   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.434092   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:55.434100   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:55.434189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:55.483532   70908 cri.go:89] found id: ""
	I0311 21:38:55.483562   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.483572   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:55.483579   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:55.483647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:55.528681   70908 cri.go:89] found id: ""
	I0311 21:38:55.528708   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.528721   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:55.528728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:55.528825   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:55.583143   70908 cri.go:89] found id: ""
	I0311 21:38:55.583167   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.583174   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:55.583179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:55.583240   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:55.636577   70908 cri.go:89] found id: ""
	I0311 21:38:55.636599   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.636607   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:55.636612   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:55.636670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:55.697268   70908 cri.go:89] found id: ""
	I0311 21:38:55.697295   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.697306   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:55.697314   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:55.697374   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:55.749272   70908 cri.go:89] found id: ""
	I0311 21:38:55.749302   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.749312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:55.749322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:55.749335   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:55.841581   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:55.841643   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:55.898537   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:55.898574   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:55.973278   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:55.973329   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:55.992958   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:55.992986   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:56.137313   70458 system_pods.go:59] 8 kube-system pods found
	I0311 21:38:56.137347   70458 system_pods.go:61] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running
	I0311 21:38:56.137354   70458 system_pods.go:61] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running
	I0311 21:38:56.137361   70458 system_pods.go:61] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running
	I0311 21:38:56.137366   70458 system_pods.go:61] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running
	I0311 21:38:56.137371   70458 system_pods.go:61] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:38:56.137375   70458 system_pods.go:61] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running
	I0311 21:38:56.137383   70458 system_pods.go:61] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:38:56.137390   70458 system_pods.go:61] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:38:56.137400   70458 system_pods.go:74] duration metric: took 4.174175629s to wait for pod list to return data ...
	I0311 21:38:56.137409   70458 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:38:56.140315   70458 default_sa.go:45] found service account: "default"
	I0311 21:38:56.140344   70458 default_sa.go:55] duration metric: took 2.92722ms for default service account to be created ...
	I0311 21:38:56.140356   70458 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:38:56.146873   70458 system_pods.go:86] 8 kube-system pods found
	I0311 21:38:56.146912   70458 system_pods.go:89] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running
	I0311 21:38:56.146923   70458 system_pods.go:89] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running
	I0311 21:38:56.146932   70458 system_pods.go:89] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running
	I0311 21:38:56.146940   70458 system_pods.go:89] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running
	I0311 21:38:56.146945   70458 system_pods.go:89] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:38:56.146951   70458 system_pods.go:89] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running
	I0311 21:38:56.146960   70458 system_pods.go:89] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:38:56.146972   70458 system_pods.go:89] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:38:56.146983   70458 system_pods.go:126] duration metric: took 6.619737ms to wait for k8s-apps to be running ...
	I0311 21:38:56.146998   70458 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:38:56.147056   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:38:56.165354   70458 system_svc.go:56] duration metric: took 18.346754ms WaitForService to wait for kubelet
	I0311 21:38:56.165387   70458 kubeadm.go:576] duration metric: took 4m22.570894549s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:38:56.165413   70458 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:38:56.168819   70458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:38:56.168845   70458 node_conditions.go:123] node cpu capacity is 2
	I0311 21:38:56.168856   70458 node_conditions.go:105] duration metric: took 3.437527ms to run NodePressure ...
	I0311 21:38:56.168868   70458 start.go:240] waiting for startup goroutines ...
	I0311 21:38:56.168875   70458 start.go:245] waiting for cluster config update ...
	I0311 21:38:56.168885   70458 start.go:254] writing updated cluster config ...
	I0311 21:38:56.169153   70458 ssh_runner.go:195] Run: rm -f paused
	I0311 21:38:56.225977   70458 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0311 21:38:56.228234   70458 out.go:177] * Done! kubectl is now configured to use "no-preload-324578" cluster and "default" namespace by default
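	With the kubeconfig written, the pending metrics-server pod reported above can be inspected from the host; a usage sketch, assuming the kubectl context name matches the profile as the log states:

	    kubectl --context no-preload-324578 -n kube-system get pods
	    kubectl --context no-preload-324578 -n kube-system describe pod metrics-server-57f55c9bc5-nv4gd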
	I0311 21:38:56.450729   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:58.450799   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	W0311 21:38:56.084193   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
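	The refused connection to localhost:8443 means no apiserver is listening on this node yet. A hypothetical quick check, not performed in this run, is whether anything is bound to the port:

	    sudo ss -ltnp | grep 8443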
	I0311 21:38:58.584354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:58.604767   70908 kubeadm.go:591] duration metric: took 4m4.440744932s to restartPrimaryControlPlane
	W0311 21:38:58.604844   70908 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:38:58.604872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:38:59.965834   70908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.36094005s)
	I0311 21:38:59.965906   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:38:59.982020   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:38:59.994794   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:39:00.007116   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:39:00.007138   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:39:00.007182   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:39:00.019744   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:39:00.019802   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:39:00.033311   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:39:00.045608   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:39:00.045685   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:39:00.059722   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.071140   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:39:00.071199   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.082635   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:39:00.093311   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:39:00.093374   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
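	Each grep/rm pair above is the stale-config cleanup: if the expected control-plane endpoint is not found in a kubeconfig file, the file is removed. Collapsed to one line per file, the same check looks like this (admin.conf shown; the other three files follow the same pattern):

	    sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf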
	I0311 21:39:00.104995   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:39:00.372164   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
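	The preflight warning above is kubeadm's own output; the remedy it names, if the warning should be silenced on the node, is:

	    sudo systemctl enable kubelet.service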
	I0311 21:39:00.950799   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:03.450080   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:05.949899   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:07.950640   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:10.450583   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:12.949481   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:14.950496   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:16.951064   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:18.958165   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:21.450609   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:23.949791   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:26.302837   70604 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.161781704s)
	I0311 21:39:26.302921   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:39:26.319602   70604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:39:26.331483   70604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:39:26.343632   70604 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:39:26.343658   70604 kubeadm.go:156] found existing configuration files:
	
	I0311 21:39:26.343705   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:39:26.354863   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:39:26.354919   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:39:26.366087   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:39:26.377221   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:39:26.377282   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:39:26.389769   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:39:26.401201   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:39:26.401255   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:39:26.412357   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:39:26.423962   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:39:26.424035   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:39:26.436189   70604 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:39:26.672030   70604 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:39:25.952857   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:28.449272   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:30.450630   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:32.450912   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:35.908605   70604 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 21:39:35.908656   70604 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:39:35.908751   70604 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:39:35.908846   70604 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:39:35.908967   70604 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:39:35.909026   70604 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:39:35.910690   70604 out.go:204]   - Generating certificates and keys ...
	I0311 21:39:35.910785   70604 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:39:35.910849   70604 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:39:35.910952   70604 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:39:35.911039   70604 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:39:35.911106   70604 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:39:35.911177   70604 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:39:35.911268   70604 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:39:35.911353   70604 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:39:35.911449   70604 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:39:35.911551   70604 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:39:35.911604   70604 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:39:35.911689   70604 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:39:35.911762   70604 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:39:35.911869   70604 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:39:35.911974   70604 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:39:35.912067   70604 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:39:35.912217   70604 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:39:35.912320   70604 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:39:35.914908   70604 out.go:204]   - Booting up control plane ...
	I0311 21:39:35.915026   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:39:35.915126   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:39:35.915216   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:39:35.915321   70604 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:39:35.915431   70604 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:39:35.915487   70604 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:39:35.915659   70604 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:39:35.915792   70604 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503325 seconds
	I0311 21:39:35.915925   70604 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:39:35.916039   70604 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:39:35.916091   70604 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:39:35.916314   70604 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-743937 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:39:35.916408   70604 kubeadm.go:309] [bootstrap-token] Using token: hxeoeg.f2scq51qa57vwzwt
	I0311 21:39:35.917880   70604 out.go:204]   - Configuring RBAC rules ...
	I0311 21:39:35.917995   70604 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:39:35.918093   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:39:35.918297   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:39:35.918490   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:39:35.918629   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:39:35.918745   70604 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:39:35.918907   70604 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:39:35.918974   70604 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:39:35.919031   70604 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:39:35.919048   70604 kubeadm.go:309] 
	I0311 21:39:35.919118   70604 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:39:35.919128   70604 kubeadm.go:309] 
	I0311 21:39:35.919225   70604 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:39:35.919236   70604 kubeadm.go:309] 
	I0311 21:39:35.919266   70604 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:39:35.919344   70604 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:39:35.919405   70604 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:39:35.919412   70604 kubeadm.go:309] 
	I0311 21:39:35.919461   70604 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:39:35.919467   70604 kubeadm.go:309] 
	I0311 21:39:35.919505   70604 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:39:35.919511   70604 kubeadm.go:309] 
	I0311 21:39:35.919553   70604 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:39:35.919640   70604 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:39:35.919727   70604 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:39:35.919736   70604 kubeadm.go:309] 
	I0311 21:39:35.919835   70604 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:39:35.919949   70604 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:39:35.919964   70604 kubeadm.go:309] 
	I0311 21:39:35.920071   70604 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token hxeoeg.f2scq51qa57vwzwt \
	I0311 21:39:35.920172   70604 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:39:35.920193   70604 kubeadm.go:309] 	--control-plane 
	I0311 21:39:35.920199   70604 kubeadm.go:309] 
	I0311 21:39:35.920271   70604 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:39:35.920280   70604 kubeadm.go:309] 
	I0311 21:39:35.920349   70604 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token hxeoeg.f2scq51qa57vwzwt \
	I0311 21:39:35.920479   70604 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:39:35.920507   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:39:35.920517   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:39:35.922125   70604 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:39:35.923386   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:39:35.955828   70604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
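	The 457-byte bridge conflist written above lands at /etc/cni/net.d/1-k8s.conflist and can be inspected from the host; a sketch assuming the profile name from this run:

	    minikube ssh -p embed-certs-743937 -- sudo cat /etc/cni/net.d/1-k8s.conflist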
	I0311 21:39:36.065309   70604 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:39:36.065389   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:36.065408   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-743937 minikube.k8s.io/updated_at=2024_03_11T21_39_36_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=embed-certs-743937 minikube.k8s.io/primary=true
	I0311 21:39:36.370945   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:36.370961   70604 ops.go:34] apiserver oom_adj: -16
	I0311 21:39:36.871194   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:37.371937   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:37.871974   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:38.371330   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:38.871791   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:34.949300   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:36.942990   70417 pod_ready.go:81] duration metric: took 4m0.000574155s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" ...
	E0311 21:39:36.943022   70417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0311 21:39:36.943043   70417 pod_ready.go:38] duration metric: took 4m12.043798271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:36.943093   70417 kubeadm.go:591] duration metric: took 4m20.121624644s to restartPrimaryControlPlane
	W0311 21:39:36.943155   70417 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:39:36.943183   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:39:39.371531   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:39.872032   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:40.371717   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:40.871615   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:41.371577   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:41.871841   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:42.371050   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:42.871044   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:43.371446   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:43.871815   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:44.371243   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:44.872056   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:45.371993   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:45.871213   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:46.371397   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:46.871185   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.371541   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.871121   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.971855   70604 kubeadm.go:1106] duration metric: took 11.906533451s to wait for elevateKubeSystemPrivileges
	W0311 21:39:47.971895   70604 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 21:39:47.971902   70604 kubeadm.go:393] duration metric: took 5m16.305518086s to StartCluster
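	The repeated "kubectl get sa default" calls ending above are the elevateKubeSystemPrivileges wait: the step completes once the default service account exists (about 11.9s here). The same check from the host, assuming the embed-certs-743937 context:

	    kubectl --context embed-certs-743937 -n default get serviceaccount default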
	I0311 21:39:47.971917   70604 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:39:47.972003   70604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:39:47.974339   70604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:39:47.974576   70604 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:39:47.976309   70604 out.go:177] * Verifying Kubernetes components...
	I0311 21:39:47.974638   70604 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:39:47.974819   70604 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:39:47.977737   70604 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-743937"
	I0311 21:39:47.977746   70604 addons.go:69] Setting default-storageclass=true in profile "embed-certs-743937"
	I0311 21:39:47.977779   70604 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-743937"
	W0311 21:39:47.977790   70604 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:39:47.977815   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.977740   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:39:47.977779   70604 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-743937"
	I0311 21:39:47.977750   70604 addons.go:69] Setting metrics-server=true in profile "embed-certs-743937"
	I0311 21:39:47.977943   70604 addons.go:234] Setting addon metrics-server=true in "embed-certs-743937"
	W0311 21:39:47.977957   70604 addons.go:243] addon metrics-server should already be in state true
	I0311 21:39:47.977985   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.978241   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978241   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978270   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.978275   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.978419   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978449   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.994019   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44139
	I0311 21:39:47.994131   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0311 21:39:47.994484   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.994514   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.994964   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.994983   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.995128   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.995143   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.995288   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0311 21:39:47.995437   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.995506   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.995583   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:47.996051   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.996073   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.996516   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.996999   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.997024   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.997383   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.997834   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.997858   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.999381   70604 addons.go:234] Setting addon default-storageclass=true in "embed-certs-743937"
	W0311 21:39:47.999406   70604 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:39:47.999432   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.999794   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.999823   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:48.012063   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41291
	I0311 21:39:48.012470   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.012899   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.012923   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.013267   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.013334   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0311 21:39:48.013484   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.013767   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.014259   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.014279   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.014556   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.014752   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.015486   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.017650   70604 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:39:48.016591   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.019717   70604 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:39:48.019736   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:39:48.019758   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.021823   70604 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:39:48.023083   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:39:48.023095   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:39:48.023108   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.023306   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.023589   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0311 21:39:48.023916   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.023937   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.024255   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.024412   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.024533   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.024653   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.025517   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.025955   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.025967   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.026292   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.027365   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.027654   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:48.027692   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:48.027909   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.027965   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.028188   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.028369   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.028496   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.028603   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.048933   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0311 21:39:48.049338   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.049918   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.049929   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.050342   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.050502   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.052274   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.052523   70604 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:39:48.052537   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:39:48.052554   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.055438   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.055864   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.055881   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.056156   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.056334   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.056495   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.056608   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.175402   70604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:39:48.196199   70604 node_ready.go:35] waiting up to 6m0s for node "embed-certs-743937" to be "Ready" ...
	I0311 21:39:48.215911   70604 node_ready.go:49] node "embed-certs-743937" has status "Ready":"True"
	I0311 21:39:48.215935   70604 node_ready.go:38] duration metric: took 19.701474ms for node "embed-certs-743937" to be "Ready" ...
	I0311 21:39:48.215945   70604 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:48.223525   70604 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.228887   70604 pod_ready.go:92] pod "etcd-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.228907   70604 pod_ready.go:81] duration metric: took 5.35597ms for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.228917   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.233811   70604 pod_ready.go:92] pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.233828   70604 pod_ready.go:81] duration metric: took 4.904721ms for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.233839   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.241831   70604 pod_ready.go:92] pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.241848   70604 pod_ready.go:81] duration metric: took 8.002663ms for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.241857   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.247609   70604 pod_ready.go:92] pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.247633   70604 pod_ready.go:81] duration metric: took 5.767693ms for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.247641   70604 pod_ready.go:38] duration metric: took 31.680305ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:48.247656   70604 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:39:48.247704   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:39:48.270201   70604 api_server.go:72] duration metric: took 295.596568ms to wait for apiserver process to appear ...
	I0311 21:39:48.270224   70604 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:39:48.270242   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:39:48.277642   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0311 21:39:48.280487   70604 api_server.go:141] control plane version: v1.28.4
	I0311 21:39:48.280505   70604 api_server.go:131] duration metric: took 10.273204ms to wait for apiserver health ...
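Note on the apiserver health wait recorded just above: the run polls https://192.168.50.114:8443/healthz until it answers 200 with body "ok" before continuing. A minimal, self-contained sketch of that polling pattern follows; it is illustrative only, not minikube's api_server.go code, and the endpoint, timeout, and the InsecureSkipVerify choice are assumptions made for the example.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls an apiserver /healthz endpoint until it returns
    // HTTP 200 with body "ok", or the deadline expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        // The apiserver in this setup serves a self-signed certificate, so
        // verification is skipped here purely for illustration.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
        // Endpoint taken from the log above; adjust for your own cluster.
        if err := waitForHealthz("https://192.168.50.114:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver healthy")
    }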
	I0311 21:39:48.280514   70604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:39:48.343718   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:39:48.346848   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:39:48.346864   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:39:48.400878   70604 system_pods.go:59] 4 kube-system pods found
	I0311 21:39:48.400907   70604 system_pods.go:61] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:48.400913   70604 system_pods.go:61] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:48.400919   70604 system_pods.go:61] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:48.400923   70604 system_pods.go:61] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:48.400931   70604 system_pods.go:74] duration metric: took 120.410888ms to wait for pod list to return data ...
	I0311 21:39:48.400940   70604 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:39:48.401062   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:39:48.401083   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:39:48.406115   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:39:48.492018   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:39:48.492042   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:39:48.581187   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:39:48.602016   70604 default_sa.go:45] found service account: "default"
	I0311 21:39:48.602046   70604 default_sa.go:55] duration metric: took 201.097662ms for default service account to be created ...
	I0311 21:39:48.602056   70604 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:39:48.862115   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:48.862148   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending
	I0311 21:39:48.862155   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending
	I0311 21:39:48.862159   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:48.862164   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:48.862169   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:48.862176   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:48.862180   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:48.862199   70604 retry.go:31] will retry after 266.08114ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.139648   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.139675   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.139682   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.139689   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.139694   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.139700   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.139706   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.139710   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.139724   70604 retry.go:31] will retry after 293.420416ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.476384   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.476411   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.476418   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.476423   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.476429   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.476433   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.476438   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.476442   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.476456   70604 retry.go:31] will retry after 439.10065ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.927298   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.927337   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.927348   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.927357   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.927366   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.927373   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.927381   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.927389   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.927411   70604 retry.go:31] will retry after 396.604462ms: missing components: kube-dns, kube-proxy
	I0311 21:39:50.092631   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.68647s)
	I0311 21:39:50.092698   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.092718   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093147   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093200   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093223   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093241   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093280   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.749522465s)
	I0311 21:39:50.093321   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093336   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093507   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093529   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093746   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.093759   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093773   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093797   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093805   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.094040   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.094041   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.094067   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.111807   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.111831   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.112109   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.112127   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.112132   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.291598   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.710367476s)
	I0311 21:39:50.291651   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.291671   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.292020   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.292036   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.292044   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.292050   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.292287   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.292328   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.292352   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.292367   70604 addons.go:470] Verifying addon metrics-server=true in "embed-certs-743937"
	I0311 21:39:50.294192   70604 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0311 21:39:50.295405   70604 addons.go:505] duration metric: took 2.320766016s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
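Note on the addon enablement that finishes here: the manifests (storage-provisioner, storageclass, metrics-server) were copied into /etc/kubernetes/addons/ on the guest and applied with the guest's kubectl under an explicit kubeconfig. A rough local equivalent of that "apply a set of manifests with a given kubeconfig" step, sketched with os/exec rather than minikube's ssh_runner (the helper itself is hypothetical; the file names are the ones visible in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyManifests runs `kubectl apply -f <file> ...`, pointing kubectl at
    // an explicit kubeconfig via the environment.
    func applyManifests(kubectl, kubeconfig string, manifests []string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        if err := applyManifests("kubectl", "/var/lib/minikube/kubeconfig", manifests); err != nil {
            fmt.Fprintln(os.Stderr, "apply failed:", err)
            os.Exit(1)
        }
    }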
	I0311 21:39:50.339623   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:50.339651   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:50.339658   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:50.339665   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:50.339671   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:50.339677   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:50.339682   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:50.339688   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:50.339695   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:50.339704   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:39:50.339728   70604 retry.go:31] will retry after 674.573171ms: missing components: kube-dns
	I0311 21:39:51.021666   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:51.021704   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.021716   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.021723   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:51.021731   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:51.021743   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:51.021754   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:51.021760   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:51.021772   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:51.021786   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:39:51.021805   70604 retry.go:31] will retry after 716.470399ms: missing components: kube-dns
	I0311 21:39:51.745786   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:51.745818   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.745829   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.745840   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:51.745849   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:51.745855   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:51.745861   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:51.745867   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:51.745876   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:51.745886   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Running
	I0311 21:39:51.745904   70604 retry.go:31] will retry after 873.920018ms: missing components: kube-dns
	I0311 21:39:52.627896   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:52.627922   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Running
	I0311 21:39:52.627927   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Running
	I0311 21:39:52.627932   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:52.627936   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:52.627941   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:52.627944   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:52.627948   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:52.627954   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:52.627958   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Running
	I0311 21:39:52.627966   70604 system_pods.go:126] duration metric: took 4.025903884s to wait for k8s-apps to be running ...
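Note on the "will retry after …: missing components: kube-dns" lines above: the k8s-apps wait re-lists the kube-system pods with a growing delay until every required app reports Running. A small generic version of that wait-with-backoff pattern is sketched below; the check function and backoff constants are assumptions for illustration and do not reproduce minikube's retry.go.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff calls check until it returns nil or the timeout is hit,
    // roughly doubling the sleep between attempts.
    func retryWithBackoff(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().Add(backoff).After(deadline) {
                return fmt.Errorf("timed out: last error: %w", err)
            }
            fmt.Printf("will retry after %s: %v\n", backoff, err)
            time.Sleep(backoff)
            backoff *= 2
        }
    }

    func main() {
        attempts := 0
        err := retryWithBackoff(10*time.Second, func() error {
            attempts++
            if attempts < 4 {
                // Stand-in for "some kube-system pods are not Running yet".
                return errors.New("missing components: kube-dns")
            }
            return nil
        })
        fmt.Println("result:", err)
    }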
	I0311 21:39:52.627976   70604 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:39:52.628017   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:39:52.643356   70604 system_svc.go:56] duration metric: took 15.371853ms WaitForService to wait for kubelet
	I0311 21:39:52.643378   70604 kubeadm.go:576] duration metric: took 4.668777182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:39:52.643394   70604 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:39:52.646844   70604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:39:52.646862   70604 node_conditions.go:123] node cpu capacity is 2
	I0311 21:39:52.646871   70604 node_conditions.go:105] duration metric: took 3.47245ms to run NodePressure ...
	I0311 21:39:52.646881   70604 start.go:240] waiting for startup goroutines ...
	I0311 21:39:52.646891   70604 start.go:245] waiting for cluster config update ...
	I0311 21:39:52.646904   70604 start.go:254] writing updated cluster config ...
	I0311 21:39:52.647207   70604 ssh_runner.go:195] Run: rm -f paused
	I0311 21:39:52.697687   70604 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:39:52.699641   70604 out.go:177] * Done! kubectl is now configured to use "embed-certs-743937" cluster and "default" namespace by default
	I0311 21:40:09.411155   70417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.467938624s)
	I0311 21:40:09.411245   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:09.429951   70417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:40:09.442265   70417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:40:09.453883   70417 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:40:09.453899   70417 kubeadm.go:156] found existing configuration files:
	
	I0311 21:40:09.453934   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0311 21:40:09.465106   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:40:09.465161   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:40:09.476155   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0311 21:40:09.487366   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:40:09.487413   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:40:09.497877   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0311 21:40:09.508056   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:40:09.508096   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:40:09.518709   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0311 21:40:09.529005   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:40:09.529039   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:40:09.539755   70417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:40:09.601265   70417 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 21:40:09.601399   70417 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:09.771387   70417 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:09.771548   70417 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:09.771653   70417 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:10.016610   70417 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:10.018526   70417 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:10.018613   70417 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:10.018670   70417 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:10.018752   70417 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:10.018830   70417 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:10.018926   70417 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:10.019019   70417 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:10.019436   70417 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:10.019924   70417 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:10.020435   70417 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:10.020949   70417 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:10.021470   70417 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:10.021550   70417 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:10.087827   70417 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:10.326702   70417 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:10.515476   70417 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:10.585573   70417 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:10.586277   70417 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:10.588784   70417 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:10.590786   70417 out.go:204]   - Booting up control plane ...
	I0311 21:40:10.590969   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:10.591080   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:10.591164   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:10.613086   70417 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:10.613187   70417 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:10.613224   70417 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:10.753737   70417 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:40:17.258016   70417 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503151 seconds
	I0311 21:40:17.258170   70417 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:40:17.276142   70417 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:40:17.805116   70417 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:40:17.805383   70417 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-766430 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:40:18.323836   70417 kubeadm.go:309] [bootstrap-token] Using token: 9sjslg.sf5b1bfk3wp77z35
	I0311 21:40:18.325382   70417 out.go:204]   - Configuring RBAC rules ...
	I0311 21:40:18.325478   70417 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:40:18.331585   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:40:18.344341   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:40:18.348362   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:40:18.352181   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:40:18.363299   70417 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:40:18.377835   70417 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:40:18.612013   70417 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:40:18.755215   70417 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:40:18.755235   70417 kubeadm.go:309] 
	I0311 21:40:18.755300   70417 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:40:18.755314   70417 kubeadm.go:309] 
	I0311 21:40:18.755434   70417 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:40:18.755460   70417 kubeadm.go:309] 
	I0311 21:40:18.755490   70417 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:40:18.755571   70417 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:40:18.755636   70417 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:40:18.755647   70417 kubeadm.go:309] 
	I0311 21:40:18.755721   70417 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:40:18.755731   70417 kubeadm.go:309] 
	I0311 21:40:18.755794   70417 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:40:18.755804   70417 kubeadm.go:309] 
	I0311 21:40:18.755876   70417 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:40:18.755941   70417 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:40:18.756010   70417 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:40:18.756029   70417 kubeadm.go:309] 
	I0311 21:40:18.756152   70417 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:40:18.756267   70417 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:40:18.756277   70417 kubeadm.go:309] 
	I0311 21:40:18.756391   70417 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 9sjslg.sf5b1bfk3wp77z35 \
	I0311 21:40:18.756533   70417 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:40:18.756578   70417 kubeadm.go:309] 	--control-plane 
	I0311 21:40:18.756585   70417 kubeadm.go:309] 
	I0311 21:40:18.756695   70417 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:40:18.756706   70417 kubeadm.go:309] 
	I0311 21:40:18.756844   70417 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 9sjslg.sf5b1bfk3wp77z35 \
	I0311 21:40:18.757021   70417 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:40:18.759444   70417 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:40:18.759474   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:40:18.759489   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:40:18.761354   70417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:40:18.762676   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:40:18.793496   70417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:40:18.840426   70417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:40:18.840508   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:18.840508   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-766430 minikube.k8s.io/updated_at=2024_03_11T21_40_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=default-k8s-diff-port-766430 minikube.k8s.io/primary=true
	I0311 21:40:19.150012   70417 ops.go:34] apiserver oom_adj: -16
	I0311 21:40:19.150129   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:19.650947   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:20.150969   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:20.650687   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:21.150849   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:21.650356   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:22.150737   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:22.650225   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:23.150390   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:23.650650   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:24.151081   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:24.650689   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:25.150428   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:25.650265   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:26.150198   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:26.650610   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:27.150325   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:27.650794   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:28.150855   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:28.650819   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:29.150345   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:29.650746   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.150910   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.650742   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.790472   70417 kubeadm.go:1106] duration metric: took 11.95003413s to wait for elevateKubeSystemPrivileges
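Note on the repeated `kubectl get sa default` runs that end here: the bring-up polls for the "default" service account (while kube-system privileges are elevated) before declaring StartCluster done. The same wait can be expressed against the Kubernetes API with client-go instead of shelling out; the kubeconfig path and polling interval below are assumptions for illustration.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForDefaultSA polls until the "default" ServiceAccount exists in the
    // "default" namespace, which is what the repeated `kubectl get sa default`
    // calls in the log are checking for.
    func waitForDefaultSA(clientset *kubernetes.Clientset, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            _, err := clientset.CoreV1().ServiceAccounts("default").
                Get(context.TODO(), "default", metav1.GetOptions{})
            if err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18358-11004/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForDefaultSA(clientset, time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("default service account is present")
    }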
	W0311 21:40:30.790506   70417 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 21:40:30.790513   70417 kubeadm.go:393] duration metric: took 5m14.024392605s to StartCluster
	I0311 21:40:30.790527   70417 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:40:30.790630   70417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:40:30.792582   70417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:40:30.792843   70417 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:40:30.794425   70417 out.go:177] * Verifying Kubernetes components...
	I0311 21:40:30.792920   70417 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:40:30.793051   70417 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:40:30.796119   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:40:30.796129   70417 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796160   70417 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.796171   70417 addons.go:243] addon metrics-server should already be in state true
	I0311 21:40:30.796197   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.796121   70417 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796127   70417 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796237   70417 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.796253   70417 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:40:30.796268   70417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-766430"
	I0311 21:40:30.796278   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.796663   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796694   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796699   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.796722   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.796777   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796807   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.812156   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0311 21:40:30.812601   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.813108   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.813138   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.813532   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.813995   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.814031   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.816427   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I0311 21:40:30.816626   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0311 21:40:30.816863   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.817015   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.817365   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.817385   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.817532   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.817557   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.817905   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.817908   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.818696   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.819070   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.819100   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.822839   70417 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.822858   70417 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:40:30.822885   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.823188   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.823202   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.834007   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I0311 21:40:30.834521   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.835017   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.835033   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.835418   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.835620   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.837838   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.839548   70417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:40:30.838397   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I0311 21:40:30.840244   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0311 21:40:30.840869   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:40:30.840885   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:40:30.840904   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.841295   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.841345   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.841877   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.841894   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.841994   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.842012   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.842246   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.842414   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.842448   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.842960   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.842985   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.844184   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.844406   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.845769   70417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:40:30.847105   70417 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:40:30.844838   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.847124   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:40:30.847142   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.845110   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.847151   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.847302   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.847424   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.847550   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:30.849856   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.850205   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.850232   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.850414   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.850575   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.850697   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.850835   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:30.861464   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0311 21:40:30.861799   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.862252   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.862271   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.862655   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.862818   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.864692   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.864956   70417 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:40:30.864978   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:40:30.864996   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.867548   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.867980   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.868013   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.868140   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.868300   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.868433   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.868558   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
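Note: each of the three scp/apply goroutines above opens its own SSH session to the node using the connection details printed by sshutil.go. A minimal sketch of reaching the same node by hand, assuming the key path, user and IP shown in the log lines above (or simply `minikube ssh -p default-k8s-diff-port-766430`):

  ssh -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa docker@192.168.61.11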
	I0311 21:40:31.037958   70417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:40:31.081173   70417 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-766430" to be "Ready" ...
	I0311 21:40:31.103697   70417 node_ready.go:49] node "default-k8s-diff-port-766430" has status "Ready":"True"
	I0311 21:40:31.103717   70417 node_ready.go:38] duration metric: took 22.519334ms for node "default-k8s-diff-port-766430" to be "Ready" ...
	I0311 21:40:31.103726   70417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:40:31.129595   70417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:31.184749   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:40:31.184771   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:40:31.194340   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:40:31.213567   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:40:31.213589   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:40:31.255647   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:40:31.255667   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:40:31.284917   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:40:31.309356   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:40:32.792293   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.597920266s)
	I0311 21:40:32.792337   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.792351   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.792625   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.792686   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.792703   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:32.792714   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.792724   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.793060   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.793086   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:32.793137   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.811230   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.811254   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.811583   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.811587   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.811606   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.156126   70417 pod_ready.go:92] pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.156148   70417 pod_ready.go:81] duration metric: took 2.026531002s for pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.156156   70417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.174226   70417 pod_ready.go:92] pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.174248   70417 pod_ready.go:81] duration metric: took 18.0858ms for pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.174257   70417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.186296   70417 pod_ready.go:92] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.186329   70417 pod_ready.go:81] duration metric: took 12.06396ms for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.186344   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.195902   70417 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.195930   70417 pod_ready.go:81] duration metric: took 9.577334ms for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.195945   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.203134   70417 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.203160   70417 pod_ready.go:81] duration metric: took 7.205172ms for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.203174   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t4fwc" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.449290   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.164324973s)
	I0311 21:40:33.449341   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.139948099s)
	I0311 21:40:33.449374   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449392   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449346   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449461   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449662   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449678   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449688   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449697   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449751   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.449795   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449810   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449823   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449836   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449886   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449905   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449926   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.450213   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.450256   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.450263   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.450272   70417 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-766430"
	I0311 21:40:33.453444   70417 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0311 21:40:33.454670   70417 addons.go:505] duration metric: took 2.661756652s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
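For reference, the addon manifests are staged under /etc/kubernetes/addons/ and applied with the kubectl binary pinned to the cluster version, against the in-VM kubeconfig. A manual equivalent on the node, using only the paths already shown in the log above, would be roughly:

  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.28.4/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml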
	I0311 21:40:33.534893   70417 pod_ready.go:92] pod "kube-proxy-t4fwc" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.534915   70417 pod_ready.go:81] duration metric: took 331.733613ms for pod "kube-proxy-t4fwc" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.534924   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.933950   70417 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.933973   70417 pod_ready.go:81] duration metric: took 399.042085ms for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.933981   70417 pod_ready.go:38] duration metric: took 2.830245804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:40:33.933994   70417 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:40:33.934053   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:40:33.953607   70417 api_server.go:72] duration metric: took 3.160728268s to wait for apiserver process to appear ...
	I0311 21:40:33.953629   70417 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:40:33.953650   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:40:33.959064   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0311 21:40:33.960101   70417 api_server.go:141] control plane version: v1.28.4
	I0311 21:40:33.960125   70417 api_server.go:131] duration metric: took 6.489682ms to wait for apiserver health ...
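The healthz wait above probes the apiserver directly on the non-default port 8444 used by this profile. A hand-run equivalent from the test host, with the endpoint taken from the log and -k added only because the cluster CA is not in the host trust store, would be:

  curl -k https://192.168.61.11:8444/healthz
  # the log above shows the expected response body: ok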
	I0311 21:40:33.960135   70417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:40:34.137026   70417 system_pods.go:59] 9 kube-system pods found
	I0311 21:40:34.137061   70417 system_pods.go:61] "coredns-5dd5756b68-kxjhf" [09678270-80f4-4bde-8080-3a3a41ecb356] Running
	I0311 21:40:34.137079   70417 system_pods.go:61] "coredns-5dd5756b68-qdcdw" [9f100559-2b0a-4068-a3e7-475b5865a1d9] Running
	I0311 21:40:34.137086   70417 system_pods.go:61] "etcd-default-k8s-diff-port-766430" [c09576c7-db47-4ce1-a8cb-d67926c413fe] Running
	I0311 21:40:34.137093   70417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766430" [f74a16b9-5e73-450f-bc62-c2e501a15ae2] Running
	I0311 21:40:34.137100   70417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766430" [abf4c5ea-4770-49a5-8480-dc9276663588] Running
	I0311 21:40:34.137105   70417 system_pods.go:61] "kube-proxy-t4fwc" [2b82ae7c-bffe-4fe4-b38c-3a789654df85] Running
	I0311 21:40:34.137111   70417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766430" [b1a26b37-7480-4f5c-bd99-785facd8b315] Running
	I0311 21:40:34.137121   70417 system_pods.go:61] "metrics-server-57f55c9bc5-9slpq" [ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:40:34.137133   70417 system_pods.go:61] "storage-provisioner" [d1d4992a-803a-4064-b372-6ba9729bd2ef] Running
	I0311 21:40:34.137147   70417 system_pods.go:74] duration metric: took 177.004603ms to wait for pod list to return data ...
	I0311 21:40:34.137201   70417 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:40:34.333563   70417 default_sa.go:45] found service account: "default"
	I0311 21:40:34.333589   70417 default_sa.go:55] duration metric: took 196.374123ms for default service account to be created ...
	I0311 21:40:34.333600   70417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:40:34.537376   70417 system_pods.go:86] 9 kube-system pods found
	I0311 21:40:34.537401   70417 system_pods.go:89] "coredns-5dd5756b68-kxjhf" [09678270-80f4-4bde-8080-3a3a41ecb356] Running
	I0311 21:40:34.537406   70417 system_pods.go:89] "coredns-5dd5756b68-qdcdw" [9f100559-2b0a-4068-a3e7-475b5865a1d9] Running
	I0311 21:40:34.537411   70417 system_pods.go:89] "etcd-default-k8s-diff-port-766430" [c09576c7-db47-4ce1-a8cb-d67926c413fe] Running
	I0311 21:40:34.537415   70417 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766430" [f74a16b9-5e73-450f-bc62-c2e501a15ae2] Running
	I0311 21:40:34.537420   70417 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766430" [abf4c5ea-4770-49a5-8480-dc9276663588] Running
	I0311 21:40:34.537423   70417 system_pods.go:89] "kube-proxy-t4fwc" [2b82ae7c-bffe-4fe4-b38c-3a789654df85] Running
	I0311 21:40:34.537427   70417 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766430" [b1a26b37-7480-4f5c-bd99-785facd8b315] Running
	I0311 21:40:34.537433   70417 system_pods.go:89] "metrics-server-57f55c9bc5-9slpq" [ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:40:34.537438   70417 system_pods.go:89] "storage-provisioner" [d1d4992a-803a-4064-b372-6ba9729bd2ef] Running
	I0311 21:40:34.537447   70417 system_pods.go:126] duration metric: took 203.840784ms to wait for k8s-apps to be running ...
	I0311 21:40:34.537453   70417 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:40:34.537493   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:34.555483   70417 system_svc.go:56] duration metric: took 18.021595ms WaitForService to wait for kubelet
	I0311 21:40:34.555511   70417 kubeadm.go:576] duration metric: took 3.76263503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:40:34.555534   70417 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:40:34.735214   70417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:40:34.735238   70417 node_conditions.go:123] node cpu capacity is 2
	I0311 21:40:34.735248   70417 node_conditions.go:105] duration metric: took 179.707447ms to run NodePressure ...
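The NodePressure step reads the capacity fields from the node object; the same numbers can be fetched manually with standard kubectl (node and context names from the log, jsonpath output shape is illustrative):

  kubectl --context default-k8s-diff-port-766430 get node default-k8s-diff-port-766430 \
    -o jsonpath='{.status.capacity}{"\n"}'
  # matches the values logged above: ephemeral-storage 17734596Ki, cpu 2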
	I0311 21:40:34.735258   70417 start.go:240] waiting for startup goroutines ...
	I0311 21:40:34.735264   70417 start.go:245] waiting for cluster config update ...
	I0311 21:40:34.735274   70417 start.go:254] writing updated cluster config ...
	I0311 21:40:34.735539   70417 ssh_runner.go:195] Run: rm -f paused
	I0311 21:40:34.782710   70417 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:40:34.784627   70417 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-766430" cluster and "default" namespace by default
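A quick sanity check that the kubeconfig really targets the new profile, using only standard kubectl commands and the context name from the "Done!" line above:

  kubectl config current-context    # expected: default-k8s-diff-port-766430
  kubectl --context default-k8s-diff-port-766430 get nodes -o wide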
	I0311 21:40:56.380462   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:40:56.380539   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:40:56.382217   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:56.382264   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:56.382349   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:56.382450   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:56.382619   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:56.382712   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:56.384498   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:56.384579   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:56.384636   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:56.384766   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:56.384863   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:56.384967   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:56.385037   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:56.385139   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:56.385208   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:56.385281   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:56.385357   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:56.385408   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:56.385492   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:56.385567   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:56.385644   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:56.385769   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:56.385855   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:56.385962   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:56.386053   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:56.386104   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:56.386184   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:56.387594   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:56.387671   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:56.387738   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:56.387811   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:56.387914   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:56.388107   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:40:56.388182   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:40:56.388297   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388522   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388614   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388844   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388914   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389074   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389131   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389314   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389405   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389594   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389603   70908 kubeadm.go:309] 
	I0311 21:40:56.389653   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:40:56.389720   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:40:56.389732   70908 kubeadm.go:309] 
	I0311 21:40:56.389779   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:40:56.389811   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:40:56.389924   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:40:56.389933   70908 kubeadm.go:309] 
	I0311 21:40:56.390058   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:40:56.390109   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:40:56.390150   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:40:56.390159   70908 kubeadm.go:309] 
	I0311 21:40:56.390299   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:40:56.390395   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:40:56.390409   70908 kubeadm.go:309] 
	I0311 21:40:56.390512   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:40:56.390603   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:40:56.390702   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:40:56.390803   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:40:56.390833   70908 kubeadm.go:309] 
	W0311 21:40:56.390936   70908 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
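This wait-control-plane failure is kubeadm repeatedly failing the kubelet's local healthz probe during a v1.20.0 init. The probe and the triage commands kubeadm suggests, taken directly from the output above, can be run on the node:

  curl -sSL http://localhost:10248/healthz
  systemctl status kubelet
  journalctl -xeu kubelet
  crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause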
	
	I0311 21:40:56.390995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:40:56.941058   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:56.958276   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:40:56.970464   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:40:56.970493   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:40:56.970552   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:40:56.983314   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:40:56.983372   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:40:56.993791   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:40:57.004040   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:40:57.004098   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:40:57.014471   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.024751   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:40:57.024805   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.035389   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:40:57.045511   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:40:57.045556   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
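The four grep/rm pairs above implement a simple rule: keep an existing kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it before retrying kubeadm init. Condensed into one illustrative line per file (path and endpoint string from the log; the || chaining is a shorthand for illustration, not minikube's exact logic):

  sudo grep 'https://control-plane.minikube.internal:8443' /etc/kubernetes/admin.conf \
    || sudo rm -f /etc/kubernetes/admin.conf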
	I0311 21:40:57.056774   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:40:57.140620   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:57.140789   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:57.310076   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:57.310193   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:57.310280   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:57.506834   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:57.509261   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:57.509362   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:57.509446   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:57.509576   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:57.509669   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:57.509765   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:57.509839   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:57.509949   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:57.510004   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:57.510109   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:57.510231   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:57.510274   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:57.510361   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:57.585562   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:57.644460   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:57.784382   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:57.848952   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:57.867302   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:57.867791   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:57.867864   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:58.036523   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:58.039051   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:58.039176   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:58.054234   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:58.055548   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:58.057378   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:58.060167   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:41:38.062360   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:41:38.062886   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:38.063137   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:43.063592   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:43.063788   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:53.064505   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:53.064773   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:13.065744   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:13.065995   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.066718   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:53.067030   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.067070   70908 kubeadm.go:309] 
	I0311 21:42:53.067135   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:42:53.067191   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:42:53.067203   70908 kubeadm.go:309] 
	I0311 21:42:53.067259   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:42:53.067318   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:42:53.067456   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:42:53.067466   70908 kubeadm.go:309] 
	I0311 21:42:53.067590   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:42:53.067650   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:42:53.067724   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:42:53.067735   70908 kubeadm.go:309] 
	I0311 21:42:53.067889   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:42:53.068021   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:42:53.068036   70908 kubeadm.go:309] 
	I0311 21:42:53.068169   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:42:53.068297   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:42:53.068412   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:42:53.068512   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:42:53.068523   70908 kubeadm.go:309] 
	I0311 21:42:53.069455   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:42:53.069572   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:42:53.069682   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:42:53.069775   70908 kubeadm.go:393] duration metric: took 7m58.960224884s to StartCluster
	I0311 21:42:53.069833   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:42:53.069899   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:42:53.120459   70908 cri.go:89] found id: ""
	I0311 21:42:53.120486   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.120497   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:42:53.120505   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:42:53.120564   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:42:53.159639   70908 cri.go:89] found id: ""
	I0311 21:42:53.159667   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.159676   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:42:53.159682   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:42:53.159738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:42:53.199584   70908 cri.go:89] found id: ""
	I0311 21:42:53.199607   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.199614   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:42:53.199619   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:42:53.199676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:42:53.238868   70908 cri.go:89] found id: ""
	I0311 21:42:53.238901   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.238908   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:42:53.238917   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:42:53.238963   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:42:53.282172   70908 cri.go:89] found id: ""
	I0311 21:42:53.282205   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.282216   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:42:53.282225   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:42:53.282278   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:42:53.318450   70908 cri.go:89] found id: ""
	I0311 21:42:53.318481   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.318491   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:42:53.318499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:42:53.318559   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:42:53.360887   70908 cri.go:89] found id: ""
	I0311 21:42:53.360913   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.360923   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:42:53.360930   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:42:53.361027   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:42:53.414181   70908 cri.go:89] found id: ""
	I0311 21:42:53.414209   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.414220   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:42:53.414232   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:42:53.414247   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:42:53.478658   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:42:53.478689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:42:53.494577   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:42:53.494604   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:42:53.586460   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:42:53.586483   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:42:53.586500   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:42:53.697218   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:42:53.697251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0311 21:42:53.746291   70908 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0311 21:42:53.746336   70908 out.go:239] * 
	W0311 21:42:53.746388   70908 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.746409   70908 out.go:239] * 
	W0311 21:42:53.747362   70908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 21:42:53.750888   70908 out.go:177] 
	W0311 21:42:53.752146   70908 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.752211   70908 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0311 21:42:53.752239   70908 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0311 21:42:53.753832   70908 out.go:177] 
	
	
	==> CRI-O <==
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.544948276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193375544918032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed44c5fd-ccbc-4f1a-ab25-8ae6aa8361dc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.545813766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c6140e2-604e-41c4-a368-0a73631f94c1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.545868056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c6140e2-604e-41c4-a368-0a73631f94c1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.545897101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0c6140e2-604e-41c4-a368-0a73631f94c1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.579443766Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1b9210f-8b34-4a98-9df4-ce50731c64e8 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.579541488Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1b9210f-8b34-4a98-9df4-ce50731c64e8 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.580991425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eaefe6ef-244e-4842-be8d-5bc8345ecb64 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.581329182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193375581311194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eaefe6ef-244e-4842-be8d-5bc8345ecb64 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.582103992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6726d792-239f-4ae3-bce9-cb53891d4b8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.582169469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6726d792-239f-4ae3-bce9-cb53891d4b8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.582217171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6726d792-239f-4ae3-bce9-cb53891d4b8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.618865739Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6aceb583-0a45-47e7-94e7-638aac4b3a1a name=/runtime.v1.RuntimeService/Version
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.618933189Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6aceb583-0a45-47e7-94e7-638aac4b3a1a name=/runtime.v1.RuntimeService/Version
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.620329536Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff490cba-8047-449a-af6b-f2a01a687e5e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.620760364Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193375620735353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff490cba-8047-449a-af6b-f2a01a687e5e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.621257291Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=995f5b87-856b-4b07-a17d-cd3b9263ab5c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.621324454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=995f5b87-856b-4b07-a17d-cd3b9263ab5c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.621370176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=995f5b87-856b-4b07-a17d-cd3b9263ab5c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.655276330Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db046259-564a-4f5a-9276-2da6204852b6 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.655346474Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db046259-564a-4f5a-9276-2da6204852b6 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.656438728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c66a93f-f538-4949-9c0d-8c778a24b4d2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.656877601Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193375656854185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c66a93f-f538-4949-9c0d-8c778a24b4d2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.657377177Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62da9a03-440a-4594-8da6-7c015103b050 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.657424695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62da9a03-440a-4594-8da6-7c015103b050 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:42:55 old-k8s-version-239315 crio[648]: time="2024-03-11 21:42:55.657454644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=62da9a03-440a-4594-8da6-7c015103b050 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar11 21:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053511] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047458] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.912778] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.895538] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.801193] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.918843] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.060085] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078339] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.210226] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.161588] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.299563] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +7.096564] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.072356] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.134589] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[Mar11 21:35] kauditd_printk_skb: 46 callbacks suppressed
	[Mar11 21:39] systemd-fstab-generator[4995]: Ignoring "noauto" option for root device
	[Mar11 21:40] systemd-fstab-generator[5275]: Ignoring "noauto" option for root device
	[  +0.073343] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:42:55 up 8 min,  0 users,  load average: 0.03, 0.15, 0.10
	Linux old-k8s-version-239315 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0001b36e0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0006f5170, 0x24, 0x0, ...)
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]: net.(*Dialer).DialContext(0xc00010c240, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0006f5170, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000acbe20, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0006f5170, 0x24, 0x60, 0x7f6880539d10, 0x118, ...)
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]: net/http.(*Transport).dial(0xc00086a000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0006f5170, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]: net/http.(*Transport).dialConn(0xc00086a000, 0x4f7fe00, 0xc000052030, 0x0, 0xc000484540, 0x5, 0xc0006f5170, 0x24, 0x0, 0xc000c98120, ...)
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]: net/http.(*Transport).dialConnFor(0xc00086a000, 0xc0005b8630)
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]: created by net/http.(*Transport).queueForDial
	Mar 11 21:42:53 old-k8s-version-239315 kubelet[5456]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 11 21:42:53 old-k8s-version-239315 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 11 21:42:53 old-k8s-version-239315 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 11 21:42:54 old-k8s-version-239315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Mar 11 21:42:54 old-k8s-version-239315 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 11 21:42:54 old-k8s-version-239315 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 11 21:42:54 old-k8s-version-239315 kubelet[5531]: I0311 21:42:54.351956    5531 server.go:416] Version: v1.20.0
	Mar 11 21:42:54 old-k8s-version-239315 kubelet[5531]: I0311 21:42:54.352192    5531 server.go:837] Client rotation is on, will bootstrap in background
	Mar 11 21:42:54 old-k8s-version-239315 kubelet[5531]: I0311 21:42:54.354186    5531 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 11 21:42:54 old-k8s-version-239315 kubelet[5531]: W0311 21:42:54.355598    5531 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 11 21:42:54 old-k8s-version-239315 kubelet[5531]: I0311 21:42:54.355972    5531 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
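As a follow-up to the kubeadm failure captured above, the diagnostics it suggests can be run directly on the node; a minimal sketch (node access via 'minikube ssh -p old-k8s-version-239315' or the VM console is assumed, and CONTAINERID is a placeholder from the kubeadm hint):

    # check whether the kubelet is running and why it keeps exiting
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    # list any control-plane containers CRI-O managed to start, then inspect one
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID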
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239315 -n old-k8s-version-239315
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239315 -n old-k8s-version-239315: exit status 2 (250.077346ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-239315" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (776.21s)
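The suggestion in the log above, to pass --extra-config=kubelet.cgroup-driver=systemd to minikube start, would amount to a retry along these lines; a sketch only, assembled from the flags recorded for this profile in the Audit table below, not a verified fix:

    out/minikube-linux-amd64 start -p old-k8s-version-239315 --memory=2200 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd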

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0311 21:39:51.177175   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-324578 -n no-preload-324578
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-11 21:47:56.837962028 +0000 UTC m=+5885.009636331
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
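The condition the test is waiting for here can be checked by hand with kubectl, assuming the profile's kubeconfig context; a sketch using the label, namespace, and 9m timeout taken from the output above:

    kubectl --context no-preload-324578 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context no-preload-324578 -n kubernetes-dashboard wait --for=condition=Ready pod \
      -l k8s-app=kubernetes-dashboard --timeout=9m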
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-324578 -n no-preload-324578
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-324578 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-324578 logs -n 25: (2.071878557s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-427678 sudo cat                              | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo find                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo crio                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-427678                                       | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-124446 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | disable-driver-mounts-124446                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:26 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-766430  | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-324578             | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-743937            | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239315        | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-766430       | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-324578                  | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:40 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-743937                 | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239315             | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC | 11 Mar 24 21:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 21:30:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 21:30:01.044166   70908 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:30:01.044254   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044259   70908 out.go:304] Setting ErrFile to fd 2...
	I0311 21:30:01.044263   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044451   70908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:30:01.044970   70908 out.go:298] Setting JSON to false
	I0311 21:30:01.045838   70908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7950,"bootTime":1710184651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:30:01.045894   70908 start.go:139] virtualization: kvm guest
	I0311 21:30:01.048311   70908 out.go:177] * [old-k8s-version-239315] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:30:01.050003   70908 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:30:01.050011   70908 notify.go:220] Checking for updates...
	I0311 21:30:01.051498   70908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:30:01.052999   70908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:30:01.054439   70908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:30:01.055768   70908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:30:01.057137   70908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:30:01.058760   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:30:01.059167   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.059205   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.073734   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0311 21:30:01.074087   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.074586   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.074618   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.074966   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.075173   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.077005   70908 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 21:30:01.078583   70908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:30:01.078879   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.078914   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.093226   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0311 21:30:01.093614   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.094174   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.094243   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.094616   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.094805   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.128302   70908 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 21:30:01.129965   70908 start.go:297] selected driver: kvm2
	I0311 21:30:01.129991   70908 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.130113   70908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:30:01.131050   70908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.131115   70908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:30:01.145452   70908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:30:01.145782   70908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:30:01.145811   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:30:01.145819   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:30:01.145863   70908 start.go:340] cluster config:
	{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.145954   70908 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.147725   70908 out.go:177] * Starting "old-k8s-version-239315" primary control-plane node in "old-k8s-version-239315" cluster
	I0311 21:30:01.148916   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:30:01.148943   70908 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0311 21:30:01.148955   70908 cache.go:56] Caching tarball of preloaded images
	I0311 21:30:01.149022   70908 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:30:01.149032   70908 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0311 21:30:01.149114   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:30:01.149263   70908 start.go:360] acquireMachinesLock for old-k8s-version-239315: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:30:05.352968   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:08.425086   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:14.504922   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:17.577080   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:23.656996   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:26.729009   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:32.809042   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:35.881008   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:41.960992   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:45.033096   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:51.112925   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:54.184989   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:00.265058   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:03.337012   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:09.416960   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:12.489005   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:18.569021   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:21.640990   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:27.721019   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:30.793040   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:36.872985   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:39.945005   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:46.025035   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:49.096988   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:55.176985   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:58.249009   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:04.328981   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:07.401006   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:13.480986   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:16.552965   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:22.632997   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:25.705064   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:31.784993   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:34.857027   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:40.937002   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:44.008989   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:50.088959   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:53.161092   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:59.241045   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:02.313084   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:08.393056   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:11.465079   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:17.545057   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:20.617082   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:26.697000   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:29.768926   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:35.849024   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:38.921096   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:41.925305   70458 start.go:364] duration metric: took 4m36.419231792s to acquireMachinesLock for "no-preload-324578"
	I0311 21:33:41.925360   70458 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:33:41.925368   70458 fix.go:54] fixHost starting: 
	I0311 21:33:41.925768   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:33:41.925798   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:33:41.940654   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0311 21:33:41.941130   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:33:41.941619   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:33:41.941646   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:33:41.942045   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:33:41.942209   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:33:41.942370   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:33:41.944009   70458 fix.go:112] recreateIfNeeded on no-preload-324578: state=Stopped err=<nil>
	I0311 21:33:41.944030   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	W0311 21:33:41.944231   70458 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:33:41.946020   70458 out.go:177] * Restarting existing kvm2 VM for "no-preload-324578" ...
	I0311 21:33:41.922711   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:33:41.922754   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:33:41.923131   70417 buildroot.go:166] provisioning hostname "default-k8s-diff-port-766430"
	I0311 21:33:41.923158   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:33:41.923430   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:33:41.925178   70417 machine.go:97] duration metric: took 4m37.414792129s to provisionDockerMachine
	I0311 21:33:41.925213   70417 fix.go:56] duration metric: took 4m37.435982654s for fixHost
	I0311 21:33:41.925219   70417 start.go:83] releasing machines lock for "default-k8s-diff-port-766430", held for 4m37.436000925s
	W0311 21:33:41.925242   70417 start.go:713] error starting host: provision: host is not running
	W0311 21:33:41.925330   70417 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0311 21:33:41.925343   70417 start.go:728] Will try again in 5 seconds ...
	I0311 21:33:41.947495   70458 main.go:141] libmachine: (no-preload-324578) Calling .Start
	I0311 21:33:41.947676   70458 main.go:141] libmachine: (no-preload-324578) Ensuring networks are active...
	I0311 21:33:41.948386   70458 main.go:141] libmachine: (no-preload-324578) Ensuring network default is active
	I0311 21:33:41.948724   70458 main.go:141] libmachine: (no-preload-324578) Ensuring network mk-no-preload-324578 is active
	I0311 21:33:41.949117   70458 main.go:141] libmachine: (no-preload-324578) Getting domain xml...
	I0311 21:33:41.949876   70458 main.go:141] libmachine: (no-preload-324578) Creating domain...
	I0311 21:33:43.129733   70458 main.go:141] libmachine: (no-preload-324578) Waiting to get IP...
	I0311 21:33:43.130601   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.131006   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.131053   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.130975   71444 retry.go:31] will retry after 209.203314ms: waiting for machine to come up
	I0311 21:33:43.341724   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.342324   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.342361   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.342279   71444 retry.go:31] will retry after 375.396917ms: waiting for machine to come up
	I0311 21:33:43.718906   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.719329   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.719351   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.719288   71444 retry.go:31] will retry after 428.365393ms: waiting for machine to come up
	I0311 21:33:44.148895   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:44.149334   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:44.149358   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:44.149284   71444 retry.go:31] will retry after 561.478535ms: waiting for machine to come up
	I0311 21:33:44.712065   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:44.712548   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:44.712576   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:44.712465   71444 retry.go:31] will retry after 700.993236ms: waiting for machine to come up
	I0311 21:33:46.926379   70417 start.go:360] acquireMachinesLock for default-k8s-diff-port-766430: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:33:45.415695   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:45.416242   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:45.416276   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:45.416215   71444 retry.go:31] will retry after 809.474202ms: waiting for machine to come up
	I0311 21:33:46.227098   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:46.227573   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:46.227608   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:46.227520   71444 retry.go:31] will retry after 1.075187328s: waiting for machine to come up
	I0311 21:33:47.303981   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:47.304454   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:47.304483   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:47.304397   71444 retry.go:31] will retry after 1.145290319s: waiting for machine to come up
	I0311 21:33:48.451871   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:48.452316   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:48.452350   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:48.452267   71444 retry.go:31] will retry after 1.172261063s: waiting for machine to come up
	I0311 21:33:49.626502   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:49.627067   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:49.627089   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:49.627023   71444 retry.go:31] will retry after 2.201479026s: waiting for machine to come up
	I0311 21:33:51.831519   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:51.831972   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:51.832008   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:51.831905   71444 retry.go:31] will retry after 2.888101699s: waiting for machine to come up
	I0311 21:33:54.721322   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:54.721753   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:54.721773   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:54.721722   71444 retry.go:31] will retry after 3.512655296s: waiting for machine to come up
	I0311 21:33:58.235767   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:58.236180   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:58.236219   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:58.236141   71444 retry.go:31] will retry after 3.975760652s: waiting for machine to come up
	I0311 21:34:03.525918   70604 start.go:364] duration metric: took 4m44.449252209s to acquireMachinesLock for "embed-certs-743937"
	I0311 21:34:03.525995   70604 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:03.526008   70604 fix.go:54] fixHost starting: 
	I0311 21:34:03.526428   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:03.526470   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:03.542427   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I0311 21:34:03.542857   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:03.543292   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:34:03.543317   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:03.543616   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:03.543806   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:03.543991   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:34:03.545366   70604 fix.go:112] recreateIfNeeded on embed-certs-743937: state=Stopped err=<nil>
	I0311 21:34:03.545391   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	W0311 21:34:03.545540   70604 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:03.548158   70604 out.go:177] * Restarting existing kvm2 VM for "embed-certs-743937" ...
	I0311 21:34:03.549803   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Start
	I0311 21:34:03.549966   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring networks are active...
	I0311 21:34:03.550712   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring network default is active
	I0311 21:34:03.551124   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring network mk-embed-certs-743937 is active
	I0311 21:34:03.551528   70604 main.go:141] libmachine: (embed-certs-743937) Getting domain xml...
	I0311 21:34:03.552226   70604 main.go:141] libmachine: (embed-certs-743937) Creating domain...
	I0311 21:34:02.213709   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.214152   70458 main.go:141] libmachine: (no-preload-324578) Found IP for machine: 192.168.39.36
	I0311 21:34:02.214181   70458 main.go:141] libmachine: (no-preload-324578) Reserving static IP address...
	I0311 21:34:02.214196   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has current primary IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.214631   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "no-preload-324578", mac: "52:54:00:00:fc:98", ip: "192.168.39.36"} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.214655   70458 main.go:141] libmachine: (no-preload-324578) DBG | skip adding static IP to network mk-no-preload-324578 - found existing host DHCP lease matching {name: "no-preload-324578", mac: "52:54:00:00:fc:98", ip: "192.168.39.36"}
	I0311 21:34:02.214666   70458 main.go:141] libmachine: (no-preload-324578) Reserved static IP address: 192.168.39.36
	I0311 21:34:02.214680   70458 main.go:141] libmachine: (no-preload-324578) Waiting for SSH to be available...
	I0311 21:34:02.214704   70458 main.go:141] libmachine: (no-preload-324578) DBG | Getting to WaitForSSH function...
	I0311 21:34:02.216798   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.217068   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.217111   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.217285   70458 main.go:141] libmachine: (no-preload-324578) DBG | Using SSH client type: external
	I0311 21:34:02.217316   70458 main.go:141] libmachine: (no-preload-324578) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa (-rw-------)
	I0311 21:34:02.217356   70458 main.go:141] libmachine: (no-preload-324578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:02.217374   70458 main.go:141] libmachine: (no-preload-324578) DBG | About to run SSH command:
	I0311 21:34:02.217389   70458 main.go:141] libmachine: (no-preload-324578) DBG | exit 0
	I0311 21:34:02.340837   70458 main.go:141] libmachine: (no-preload-324578) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:02.341154   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetConfigRaw
	I0311 21:34:02.341752   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:02.344368   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.344756   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.344791   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.344942   70458 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/config.json ...
	I0311 21:34:02.345142   70458 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:02.345159   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:02.345353   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.347647   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.348001   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.348029   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.348118   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.348284   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.348432   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.348548   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.348704   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.348913   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.348925   70458 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:02.457273   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:02.457298   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.457523   70458 buildroot.go:166] provisioning hostname "no-preload-324578"
	I0311 21:34:02.457554   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.457757   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.460347   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.460658   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.460688   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.460913   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.461126   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.461286   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.461415   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.461574   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.461758   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.461775   70458 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-324578 && echo "no-preload-324578" | sudo tee /etc/hostname
	I0311 21:34:02.583388   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-324578
	
	I0311 21:34:02.583414   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.586043   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.586399   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.586431   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.586592   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.586799   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.586957   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.587084   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.587271   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.587433   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.587449   70458 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-324578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-324578/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-324578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:02.702365   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:02.702399   70458 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:02.702420   70458 buildroot.go:174] setting up certificates
	I0311 21:34:02.702431   70458 provision.go:84] configureAuth start
	I0311 21:34:02.702439   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.702725   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:02.705459   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.705882   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.705902   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.706048   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.708166   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.708476   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.708502   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.708618   70458 provision.go:143] copyHostCerts
	I0311 21:34:02.708675   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:02.708684   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:02.708764   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:02.708875   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:02.708885   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:02.708911   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:02.708977   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:02.708984   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:02.709005   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:02.709063   70458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.no-preload-324578 san=[127.0.0.1 192.168.39.36 localhost minikube no-preload-324578]
	I0311 21:34:02.823423   70458 provision.go:177] copyRemoteCerts
	I0311 21:34:02.823484   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:02.823508   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.826221   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.826538   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.826584   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.826743   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.826974   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.827158   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.827344   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:02.912138   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:02.938138   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0311 21:34:02.967391   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:34:02.992208   70458 provision.go:87] duration metric: took 289.765831ms to configureAuth
	I0311 21:34:02.992232   70458 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:02.992376   70458 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:34:02.992440   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.994808   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.995124   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.995154   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.995315   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.995490   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.995640   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.995818   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.995997   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.996187   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.996202   70458 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:03.283611   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:03.283643   70458 machine.go:97] duration metric: took 938.487892ms to provisionDockerMachine
	I0311 21:34:03.283655   70458 start.go:293] postStartSetup for "no-preload-324578" (driver="kvm2")
	I0311 21:34:03.283667   70458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:03.283681   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.284008   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:03.284043   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.286802   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.287220   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.287262   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.287379   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.287546   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.287731   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.287930   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.372563   70458 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:03.377151   70458 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:03.377171   70458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:03.377225   70458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:03.377291   70458 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:03.377377   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:03.387792   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:03.412721   70458 start.go:296] duration metric: took 129.055927ms for postStartSetup
	I0311 21:34:03.412770   70458 fix.go:56] duration metric: took 21.487401487s for fixHost
	I0311 21:34:03.412790   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.415209   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.415507   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.415533   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.415668   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.415866   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.416035   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.416179   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.416353   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:03.416502   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:03.416513   70458 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:03.525759   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192843.475283818
	
	I0311 21:34:03.525781   70458 fix.go:216] guest clock: 1710192843.475283818
	I0311 21:34:03.525790   70458 fix.go:229] Guest: 2024-03-11 21:34:03.475283818 +0000 UTC Remote: 2024-03-11 21:34:03.412775346 +0000 UTC m=+298.052241307 (delta=62.508472ms)
	I0311 21:34:03.525815   70458 fix.go:200] guest clock delta is within tolerance: 62.508472ms
	I0311 21:34:03.525833   70458 start.go:83] releasing machines lock for "no-preload-324578", held for 21.600490138s
	I0311 21:34:03.525866   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.526157   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:03.528771   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.529117   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.529143   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.529272   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529721   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529897   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529978   70458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:03.530022   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.530124   70458 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:03.530151   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.532450   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.532624   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.532813   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.532843   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.533001   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.533010   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.533034   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.533171   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.533197   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.533350   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.533353   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.533504   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.533506   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.533639   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.614855   70458 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:03.638835   70458 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:03.787832   70458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:03.794627   70458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:03.794677   70458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:03.811771   70458 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:03.811790   70458 start.go:494] detecting cgroup driver to use...
	I0311 21:34:03.811845   70458 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:03.829561   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:03.844536   70458 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:03.844582   70458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:03.859811   70458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:03.875041   70458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:03.991456   70458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:04.174783   70458 docker.go:233] disabling docker service ...
	I0311 21:34:04.174848   70458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:04.192524   70458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:04.206906   70458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:04.340047   70458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:04.455686   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:04.472512   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:04.495487   70458 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:34:04.495550   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.506921   70458 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:04.506997   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.519408   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.531418   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.543684   70458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:04.555846   70458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:04.567610   70458 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:04.567658   70458 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:04.583015   70458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:04.594515   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:04.715185   70458 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:04.872750   70458 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:04.872848   70458 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:04.878207   70458 start.go:562] Will wait 60s for crictl version
	I0311 21:34:04.878250   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:04.882436   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:04.921007   70458 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:04.921079   70458 ssh_runner.go:195] Run: crio --version
	I0311 21:34:04.959326   70458 ssh_runner.go:195] Run: crio --version
	I0311 21:34:04.997595   70458 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0311 21:34:04.999092   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:05.002092   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:05.002526   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:05.002566   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:05.002790   70458 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:05.007758   70458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:05.023330   70458 kubeadm.go:877] updating cluster {Name:no-preload-324578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:05.023430   70458 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 21:34:05.023461   70458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:05.063043   70458 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0311 21:34:05.063071   70458 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:34:05.063161   70458 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.063170   70458 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.063183   70458 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.063190   70458 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.063233   70458 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0311 21:34:05.063171   70458 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.063272   70458 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.063307   70458 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.065013   70458 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.065019   70458 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.065020   70458 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.065045   70458 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0311 21:34:05.065017   70458 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.065018   70458 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.065064   70458 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.065365   70458 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.209182   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.211431   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.220663   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.230965   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.237859   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0311 21:34:05.260820   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.288596   70458 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0311 21:34:05.288651   70458 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.288697   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.324896   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.342987   70458 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0311 21:34:05.343030   70458 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.343080   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.371663   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.377262   70458 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0311 21:34:05.377306   70458 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.377349   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:04.792889   70604 main.go:141] libmachine: (embed-certs-743937) Waiting to get IP...
	I0311 21:34:04.793678   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:04.794097   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:04.794152   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:04.794064   71579 retry.go:31] will retry after 281.522937ms: waiting for machine to come up
	I0311 21:34:05.077518   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.077856   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.077889   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.077814   71579 retry.go:31] will retry after 303.836522ms: waiting for machine to come up
	I0311 21:34:05.383244   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.383796   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.383839   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.383758   71579 retry.go:31] will retry after 333.172379ms: waiting for machine to come up
	I0311 21:34:05.718117   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.718603   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.718630   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.718562   71579 retry.go:31] will retry after 469.046827ms: waiting for machine to come up
	I0311 21:34:06.189304   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:06.189748   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:06.189777   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:06.189705   71579 retry.go:31] will retry after 636.781259ms: waiting for machine to come up
	I0311 21:34:06.828672   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:06.829136   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:06.829174   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:06.829078   71579 retry.go:31] will retry after 758.609427ms: waiting for machine to come up
	I0311 21:34:07.589134   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:07.589490   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:07.589513   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:07.589466   71579 retry.go:31] will retry after 990.575872ms: waiting for machine to come up
	I0311 21:34:08.581971   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:08.582312   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:08.582344   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:08.582290   71579 retry.go:31] will retry after 1.142377902s: waiting for machine to come up
	I0311 21:34:05.421288   70458 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0311 21:34:05.421340   70458 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.421390   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473450   70458 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0311 21:34:05.473497   70458 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.473527   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.473545   70458 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0311 21:34:05.473584   70458 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.473603   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.473639   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473663   70458 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0311 21:34:05.473701   70458 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.473707   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.473730   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473548   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473766   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.569510   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.569615   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.578915   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0311 21:34:05.578979   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:05.579007   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:05.579029   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.579077   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:05.579117   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.579158   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.579209   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0311 21:34:05.579272   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:05.584413   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0311 21:34:05.584425   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.584458   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.679191   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0311 21:34:05.679259   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0311 21:34:05.679288   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:05.679337   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:05.679368   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0311 21:34:05.679369   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0311 21:34:05.679414   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:05.679428   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0311 21:34:05.679485   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:07.621341   70458 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.942028932s)
	I0311 21:34:07.621382   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0311 21:34:07.621385   70458 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.941873405s)
	I0311 21:34:07.621413   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0311 21:34:07.621424   70458 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (1.941989707s)
	I0311 21:34:07.621452   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0311 21:34:07.621544   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.037072472s)
	I0311 21:34:07.621558   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0311 21:34:07.621580   70458 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:07.621627   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
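Because the no-preload profile has no preload tarball for v1.29.0-rc.2, each image is checked with podman image inspect, any stale tag is removed with crictl rmi, and the image is re-imported from the local cache. The stat calls above decide whether a tarball already sits on the VM so the copy can be skipped; their literal "%s %y" format string is what Go's fmt renders as %!s(MISSING) %!y(MISSING) in the log. A condensed sketch of one load cycle, run on the VM over SSH:

	# skip the transfer if the tarball is already on the VM
	stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0 \
	  || echo "not present; minikube copies it from the host cache first"
	# import the tarball into the CRI-O image store
	sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0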
	I0311 21:34:09.726761   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:09.727207   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:09.727241   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:09.727153   71579 retry.go:31] will retry after 1.17092616s: waiting for machine to come up
	I0311 21:34:10.899311   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:10.899656   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:10.899675   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:10.899631   71579 retry.go:31] will retry after 1.870900402s: waiting for machine to come up
	I0311 21:34:12.771931   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:12.772421   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:12.772457   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:12.772375   71579 retry.go:31] will retry after 2.721804623s: waiting for machine to come up
	I0311 21:34:11.524646   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.902991705s)
	I0311 21:34:11.524683   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0311 21:34:11.524711   70458 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:11.524787   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:13.704750   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.179921724s)
	I0311 21:34:13.704786   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0311 21:34:13.704817   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:13.704868   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:15.496186   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:15.496686   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:15.496722   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:15.496627   71579 retry.go:31] will retry after 2.568850361s: waiting for machine to come up
	I0311 21:34:18.068470   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:18.068926   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:18.068959   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:18.068872   71579 retry.go:31] will retry after 4.111366971s: waiting for machine to come up
	I0311 21:34:16.267427   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.562528088s)
	I0311 21:34:16.267458   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0311 21:34:16.267486   70458 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:16.267535   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:17.218029   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0311 21:34:17.218065   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:17.218104   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:18.987120   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.768996335s)
	I0311 21:34:18.987149   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0311 21:34:18.987167   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:18.987219   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:23.543571   70908 start.go:364] duration metric: took 4m22.394278247s to acquireMachinesLock for "old-k8s-version-239315"
	I0311 21:34:23.543649   70908 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:23.543661   70908 fix.go:54] fixHost starting: 
	I0311 21:34:23.544084   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:23.544139   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:23.561669   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0311 21:34:23.562158   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:23.562618   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:34:23.562645   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:23.562949   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:23.563114   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:23.563306   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetState
	I0311 21:34:23.565152   70908 fix.go:112] recreateIfNeeded on old-k8s-version-239315: state=Stopped err=<nil>
	I0311 21:34:23.565178   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	W0311 21:34:23.565351   70908 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:23.567943   70908 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239315" ...
	I0311 21:34:22.182707   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.183200   70604 main.go:141] libmachine: (embed-certs-743937) Found IP for machine: 192.168.50.114
	I0311 21:34:22.183228   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has current primary IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.183238   70604 main.go:141] libmachine: (embed-certs-743937) Reserving static IP address...
	I0311 21:34:22.183694   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "embed-certs-743937", mac: "52:54:00:84:b4:7a", ip: "192.168.50.114"} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.183716   70604 main.go:141] libmachine: (embed-certs-743937) DBG | skip adding static IP to network mk-embed-certs-743937 - found existing host DHCP lease matching {name: "embed-certs-743937", mac: "52:54:00:84:b4:7a", ip: "192.168.50.114"}
	I0311 21:34:22.183728   70604 main.go:141] libmachine: (embed-certs-743937) Reserved static IP address: 192.168.50.114
	I0311 21:34:22.183746   70604 main.go:141] libmachine: (embed-certs-743937) Waiting for SSH to be available...
	I0311 21:34:22.183760   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Getting to WaitForSSH function...
	I0311 21:34:22.185820   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.186157   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.186193   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.186285   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Using SSH client type: external
	I0311 21:34:22.186317   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa (-rw-------)
	I0311 21:34:22.186349   70604 main.go:141] libmachine: (embed-certs-743937) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:22.186368   70604 main.go:141] libmachine: (embed-certs-743937) DBG | About to run SSH command:
	I0311 21:34:22.186384   70604 main.go:141] libmachine: (embed-certs-743937) DBG | exit 0
	I0311 21:34:22.313253   70604 main.go:141] libmachine: (embed-certs-743937) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:22.313570   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetConfigRaw
	I0311 21:34:22.314271   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:22.317040   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.317404   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.317509   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.317641   70604 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/config.json ...
	I0311 21:34:22.317814   70604 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:22.317830   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:22.318049   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.320550   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.320833   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.320859   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.320992   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.321223   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.321405   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.321547   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.321708   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.321930   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.321944   70604 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:22.430028   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:22.430055   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.430345   70604 buildroot.go:166] provisioning hostname "embed-certs-743937"
	I0311 21:34:22.430374   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.430568   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.433555   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.433884   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.433907   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.434102   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.434311   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.434474   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.434611   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.434762   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.434936   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.434954   70604 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-743937 && echo "embed-certs-743937" | sudo tee /etc/hostname
	I0311 21:34:22.564819   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-743937
	
	I0311 21:34:22.564848   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.567667   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.568075   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.568122   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.568325   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.568519   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.568719   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.568913   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.569094   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.569335   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.569361   70604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-743937' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-743937/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-743937' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:22.684397   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
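Hostname provisioning over SSH, as logged above, is two steps: set the transient and persistent hostname, then make sure /etc/hosts maps 127.0.1.1 to the new name. Condensed, the script amounts to:

	NAME=embed-certs-743937
	# transient hostname plus the persistent /etc/hostname entry
	sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	# add or update the 127.0.1.1 entry only if the name is not already mapped
	if ! grep -xq ".*\s$NAME" /etc/hosts; then
	    if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" /etc/hosts
	    else
	        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	    fi
	fi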
	I0311 21:34:22.684425   70604 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:22.684473   70604 buildroot.go:174] setting up certificates
	I0311 21:34:22.684490   70604 provision.go:84] configureAuth start
	I0311 21:34:22.684507   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.684840   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:22.687805   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.688156   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.688178   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.688401   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.690975   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.691302   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.691321   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.691469   70604 provision.go:143] copyHostCerts
	I0311 21:34:22.691528   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:22.691540   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:22.691598   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:22.691690   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:22.691706   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:22.691729   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:22.691834   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:22.691850   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:22.691878   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:22.691946   70604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.embed-certs-743937 san=[127.0.0.1 192.168.50.114 embed-certs-743937 localhost minikube]
	I0311 21:34:22.838395   70604 provision.go:177] copyRemoteCerts
	I0311 21:34:22.838452   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:22.838478   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.840975   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.841308   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.841342   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.841487   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.841684   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.841834   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.841968   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:22.924202   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:22.956079   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0311 21:34:22.982352   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 21:34:23.008286   70604 provision.go:87] duration metric: took 323.780619ms to configureAuth
	I0311 21:34:23.008311   70604 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:23.008481   70604 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:34:23.008553   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.011128   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.011439   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.011461   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.011632   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.011780   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.011919   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.012094   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.012278   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:23.012436   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:23.012452   70604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:23.288122   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:23.288146   70604 machine.go:97] duration metric: took 970.321311ms to provisionDockerMachine
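The last provisioning step writes a sysconfig drop-in so CRI-O treats the service CIDR as an insecure registry, then restarts the daemon. The equivalent shell, matching the multi-line command shown above:

	sudo mkdir -p /etc/sysconfig
	printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio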
	I0311 21:34:23.288157   70604 start.go:293] postStartSetup for "embed-certs-743937" (driver="kvm2")
	I0311 21:34:23.288167   70604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:23.288180   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.288496   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:23.288532   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.291434   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.291823   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.291856   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.292079   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.292297   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.292468   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.292629   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.376367   70604 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:23.381629   70604 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:23.381660   70604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:23.381754   70604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:23.381855   70604 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:23.381967   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:23.392280   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:23.423241   70604 start.go:296] duration metric: took 135.071082ms for postStartSetup
	I0311 21:34:23.423283   70604 fix.go:56] duration metric: took 19.897275281s for fixHost
	I0311 21:34:23.423310   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.426264   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.426623   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.426652   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.426862   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.427052   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.427256   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.427419   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.427575   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:23.427809   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:23.427822   70604 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:23.543425   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192863.499269756
	
	I0311 21:34:23.543447   70604 fix.go:216] guest clock: 1710192863.499269756
	I0311 21:34:23.543454   70604 fix.go:229] Guest: 2024-03-11 21:34:23.499269756 +0000 UTC Remote: 2024-03-11 21:34:23.423289031 +0000 UTC m=+304.494814333 (delta=75.980725ms)
	I0311 21:34:23.543472   70604 fix.go:200] guest clock delta is within tolerance: 75.980725ms
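The guest-clock check reads the VM's wall clock with date +%s.%N over SSH (shown in the log with the % verbs escaped by Go's fmt) and compares it against the host's view of the same instant; the 75ms delta here is well within tolerance, so no resync is needed. A rough way to reproduce the measurement by hand, assuming the machine's SSH key is loaded:

	# read both clocks as fractional seconds and print the difference
	guest=$(ssh docker@192.168.50.114 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "clock delta: %+.6f s\n", g - h }'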
	I0311 21:34:23.543478   70604 start.go:83] releasing machines lock for "embed-certs-743937", held for 20.0175167s
	I0311 21:34:23.543504   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.543746   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:23.546763   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.547188   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.547223   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.547396   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.547882   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.548077   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.548163   70604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:23.548226   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.548282   70604 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:23.548309   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.551186   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551485   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551609   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.551642   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551795   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.551979   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.552001   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.552035   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.552146   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.552211   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.552277   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.552368   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.552501   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.552666   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.660064   70604 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:23.668731   70604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:23.831784   70604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:23.840331   70604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:23.840396   70604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:23.864730   70604 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
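Before choosing a cgroup driver, minikube disables any bridge or podman CNI config that CRI-O might otherwise pick up, by renaming it with a .mk_disabled suffix; the line above reports which files were moved. The same find invocation, quoted for an interactive shell:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;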
	I0311 21:34:23.864766   70604 start.go:494] detecting cgroup driver to use...
	I0311 21:34:23.864831   70604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:23.886072   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:23.901660   70604 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:23.901727   70604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:23.917374   70604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:23.932525   70604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:24.066368   70604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:24.222425   70604 docker.go:233] disabling docker service ...
	I0311 21:34:24.222487   70604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:24.240937   70604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:24.257050   70604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:24.395003   70604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:24.550709   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:24.572524   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:24.599710   70604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:34:24.599776   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.612426   70604 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:24.612514   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.626989   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.639576   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.653711   70604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:24.673581   70604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:24.684772   70604 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:24.684841   70604 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:24.707855   70604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
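The earlier sysctl probe failed only because br_netfilter was not loaded yet; after the modprobe and the ip_forward write above, the kernel prerequisites can be confirmed with (a sketch):

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	# expected: net.ipv4.ip_forward = 1, and the bridge key now resolves instead of erroring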
	I0311 21:34:24.719801   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:24.904788   70604 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:25.063437   70604 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:25.063511   70604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:25.070294   70604 start.go:562] Will wait 60s for crictl version
	I0311 21:34:25.070352   70604 ssh_runner.go:195] Run: which crictl
	I0311 21:34:25.074945   70604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:25.121979   70604 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:25.122070   70604 ssh_runner.go:195] Run: crio --version
	I0311 21:34:25.159092   70604 ssh_runner.go:195] Run: crio --version
	I0311 21:34:25.207391   70604 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 21:34:21.469205   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.481954559s)
	I0311 21:34:21.469242   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0311 21:34:21.469285   70458 cache_images.go:123] Successfully loaded all cached images
	I0311 21:34:21.469295   70458 cache_images.go:92] duration metric: took 16.40620232s to LoadCachedImages
	I0311 21:34:21.469306   70458 kubeadm.go:928] updating node { 192.168.39.36 8443 v1.29.0-rc.2 crio true true} ...
	I0311 21:34:21.469436   70458 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-324578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:21.469513   70458 ssh_runner.go:195] Run: crio config
	I0311 21:34:21.531635   70458 cni.go:84] Creating CNI manager for ""
	I0311 21:34:21.531659   70458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:21.531671   70458 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:21.531690   70458 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-324578 NodeName:no-preload-324578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:34:21.531820   70458 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-324578"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:21.531876   70458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0311 21:34:21.546000   70458 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:21.546060   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:21.558818   70458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0311 21:34:21.577685   70458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0311 21:34:21.595960   70458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0311 21:34:21.615003   70458 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:21.619290   70458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
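The one-liner above is a grep-and-append idiom that keeps the /etc/hosts pin idempotent; unpacked for readability (same command, IP and hostname from the log):

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  echo "192.168.39.36	control-plane.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts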
	I0311 21:34:21.633307   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:21.751586   70458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:21.771672   70458 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578 for IP: 192.168.39.36
	I0311 21:34:21.771698   70458 certs.go:194] generating shared ca certs ...
	I0311 21:34:21.771717   70458 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:21.771907   70458 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:21.771975   70458 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:21.771987   70458 certs.go:256] generating profile certs ...
	I0311 21:34:21.772093   70458 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/client.key
	I0311 21:34:21.772190   70458 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.key.681a9200
	I0311 21:34:21.772244   70458 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.key
	I0311 21:34:21.772371   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:21.772421   70458 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:21.772435   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:21.772475   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:21.772509   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:21.772542   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:21.772606   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:21.773241   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:21.833566   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:21.868156   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:21.910118   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:21.952222   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0311 21:34:21.988148   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 21:34:22.018493   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:22.045225   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:22.071481   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:22.097525   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:22.123425   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:22.156613   70458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:22.174679   70458 ssh_runner.go:195] Run: openssl version
	I0311 21:34:22.181137   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:22.197490   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.203508   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.203556   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.210822   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:22.224269   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:22.237282   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.242762   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.242816   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.249334   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:22.261866   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:22.273674   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.279004   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.279055   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.285394   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
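The ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which is how the trust store finds the certificates. The idiom, spelled out as a sketch for the minikubeCA case:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here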
	I0311 21:34:22.299493   70458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:22.304827   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:22.311349   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:22.318377   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:22.325621   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:22.332316   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:22.338893   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
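Each check above asks OpenSSL whether the certificate is still valid 86400 seconds (24 hours) from now. Looping over the same top-level client certs, a sketch with paths from the log:

	for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
	  openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
	    || echo "${c}.crt expires within 24h"
	done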
	I0311 21:34:22.345167   70458 kubeadm.go:391] StartCluster: {Name:no-preload-324578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:22.345246   70458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:22.345286   70458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:22.386703   70458 cri.go:89] found id: ""
	I0311 21:34:22.386785   70458 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:22.398475   70458 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:22.398494   70458 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:22.398500   70458 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:22.398558   70458 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:22.409434   70458 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:22.410675   70458 kubeconfig.go:125] found "no-preload-324578" server: "https://192.168.39.36:8443"
	I0311 21:34:22.412906   70458 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:22.423677   70458 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.36
	I0311 21:34:22.423708   70458 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:22.423719   70458 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:22.423762   70458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:22.472548   70458 cri.go:89] found id: ""
	I0311 21:34:22.472615   70458 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:22.494701   70458 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:22.506944   70458 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:22.506964   70458 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:22.507015   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:22.517468   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:22.517521   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:22.528281   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:22.538496   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:22.538533   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:22.553009   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:22.566120   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:22.566189   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:22.579239   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:22.590180   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:22.590227   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:22.602988   70458 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:22.615631   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:22.730568   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.355205   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.588923   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.694870   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.796820   70458 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:23.796918   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.297341   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.797197   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.840030   70458 api_server.go:72] duration metric: took 1.043209284s to wait for apiserver process to appear ...
	I0311 21:34:24.840062   70458 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:34:24.840101   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:24.840560   70458 api_server.go:269] stopped: https://192.168.39.36:8443/healthz: Get "https://192.168.39.36:8443/healthz": dial tcp 192.168.39.36:8443: connect: connection refused
	I0311 21:34:25.341161   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:23.569356   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .Start
	I0311 21:34:23.569527   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring networks are active...
	I0311 21:34:23.570188   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network default is active
	I0311 21:34:23.570613   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network mk-old-k8s-version-239315 is active
	I0311 21:34:23.571070   70908 main.go:141] libmachine: (old-k8s-version-239315) Getting domain xml...
	I0311 21:34:23.571836   70908 main.go:141] libmachine: (old-k8s-version-239315) Creating domain...
	I0311 21:34:24.895619   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting to get IP...
	I0311 21:34:24.896680   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:24.897160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:24.897218   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:24.897131   71714 retry.go:31] will retry after 268.563191ms: waiting for machine to come up
	I0311 21:34:25.167783   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.168312   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.168343   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.168268   71714 retry.go:31] will retry after 245.059124ms: waiting for machine to come up
	I0311 21:34:25.414644   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.415139   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.415168   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.415100   71714 retry.go:31] will retry after 407.807793ms: waiting for machine to come up
	I0311 21:34:25.824887   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.825351   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.825379   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.825274   71714 retry.go:31] will retry after 503.187834ms: waiting for machine to come up
	I0311 21:34:25.208819   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:25.211726   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:25.212203   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:25.212244   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:25.212486   70604 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:25.217365   70604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:25.233670   70604 kubeadm.go:877] updating cluster {Name:embed-certs-743937 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:25.233825   70604 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:34:25.233886   70604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:25.282028   70604 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 21:34:25.282108   70604 ssh_runner.go:195] Run: which lz4
	I0311 21:34:25.287047   70604 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:34:25.291721   70604 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:34:25.291751   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 21:34:27.414481   70604 crio.go:444] duration metric: took 2.127464595s to copy over tarball
	I0311 21:34:27.414554   70604 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
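The ~458 MB copy above only happens because the existence check failed; once the tarball is on the node, the preload path reduces to a stat-then-extract (a sketch, paths as logged):

	stat -c "%s %y" /preloaded.tar.lz4 \
	  || echo "not present yet: transfer the preload tarball to /preloaded.tar.lz4 first"
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4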
	I0311 21:34:28.225996   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:28.226031   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:28.226048   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.285274   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:28.285307   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:28.340493   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.512353   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:28.512409   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:28.840800   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.852523   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:28.852560   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:29.341135   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:29.354997   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:29.355028   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:29.840769   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:29.848023   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0311 21:34:29.856262   70458 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:34:29.856290   70458 api_server.go:131] duration metric: took 5.016219789s to wait for apiserver health ...
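The 403 and 500 responses above are the normal progression while RBAC bootstrap and the post-start hooks finish; an equivalent unauthenticated wait loop (a sketch, endpoint from the log) looks like:

	until curl -fsk https://192.168.39.36:8443/healthz >/dev/null; do
	  sleep 0.5   # -f makes curl fail on 403/500, so the loop exits only once healthz returns 200 "ok"
	done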
	I0311 21:34:29.856300   70458 cni.go:84] Creating CNI manager for ""
	I0311 21:34:29.856308   70458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:29.858297   70458 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:34:29.859734   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:34:29.891375   70458 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:34:29.932393   70458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:34:29.959208   70458 system_pods.go:59] 8 kube-system pods found
	I0311 21:34:29.959257   70458 system_pods.go:61] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:34:29.959268   70458 system_pods.go:61] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:34:29.959278   70458 system_pods.go:61] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:34:29.959290   70458 system_pods.go:61] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:34:29.959296   70458 system_pods.go:61] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:34:29.959303   70458 system_pods.go:61] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:34:29.959319   70458 system_pods.go:61] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:34:29.959335   70458 system_pods.go:61] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:34:29.959343   70458 system_pods.go:74] duration metric: took 26.926978ms to wait for pod list to return data ...
	I0311 21:34:29.959355   70458 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:34:29.963151   70458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:34:29.963179   70458 node_conditions.go:123] node cpu capacity is 2
	I0311 21:34:29.963193   70458 node_conditions.go:105] duration metric: took 3.825246ms to run NodePressure ...
	I0311 21:34:29.963209   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:26.330005   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:26.330547   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:26.330569   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:26.330464   71714 retry.go:31] will retry after 723.914956ms: waiting for machine to come up
	I0311 21:34:27.056271   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.056879   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.056901   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.056834   71714 retry.go:31] will retry after 693.583075ms: waiting for machine to come up
	I0311 21:34:27.752514   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.752958   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.752980   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.752916   71714 retry.go:31] will retry after 902.247864ms: waiting for machine to come up
	I0311 21:34:28.657551   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:28.658023   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:28.658079   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:28.658008   71714 retry.go:31] will retry after 1.140425887s: waiting for machine to come up
	I0311 21:34:29.800305   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:29.800824   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:29.800852   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:29.800774   71714 retry.go:31] will retry after 1.68593342s: waiting for machine to come up
	I0311 21:34:32.367999   70458 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.404768175s)
	I0311 21:34:32.368034   70458 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:34:32.375444   70458 kubeadm.go:733] kubelet initialised
	I0311 21:34:32.375468   70458 kubeadm.go:734] duration metric: took 7.423643ms waiting for restarted kubelet to initialise ...
	I0311 21:34:32.375477   70458 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:32.383579   70458 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.389728   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "coredns-76f75df574-s6lsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.389755   70458 pod_ready.go:81] duration metric: took 6.144226ms for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.389766   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "coredns-76f75df574-s6lsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.389775   70458 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.398797   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "etcd-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.398822   70458 pod_ready.go:81] duration metric: took 9.033188ms for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.398833   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "etcd-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.398841   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.407870   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-apiserver-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.407905   70458 pod_ready.go:81] duration metric: took 9.056349ms for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.407915   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-apiserver-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.407928   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.414434   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.414455   70458 pod_ready.go:81] duration metric: took 6.519611ms for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.414463   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.414468   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.771994   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-proxy-rmz4b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.772025   70458 pod_ready.go:81] duration metric: took 357.549783ms for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.772034   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-proxy-rmz4b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.772041   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:33.175562   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-scheduler-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.175595   70458 pod_ready.go:81] duration metric: took 403.546508ms for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:33.175608   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-scheduler-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.175617   70458 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:33.573749   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.573777   70458 pod_ready.go:81] duration metric: took 398.141162ms for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:33.573789   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.573799   70458 pod_ready.go:38] duration metric: took 1.198311127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
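A hand-run equivalent of the readiness poll above, for anyone reproducing this outside the harness, is a pair of kubectl wait calls. This is only a sketch: it assumes minikube's default context naming (context name = profile name) and borrows the 4m per-pod budget shown in the log.

  # Hedged sketch: wait for the node, then for the kube-dns pods, as the poller above does.
  kubectl --context no-preload-324578 wait node/no-preload-324578 --for=condition=Ready --timeout=4m
  kubectl --context no-preload-324578 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m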
	I0311 21:34:33.573862   70458 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:34:33.592112   70458 ops.go:34] apiserver oom_adj: -16
	I0311 21:34:33.592148   70458 kubeadm.go:591] duration metric: took 11.193640837s to restartPrimaryControlPlane
	I0311 21:34:33.592161   70458 kubeadm.go:393] duration metric: took 11.247001751s to StartCluster
	I0311 21:34:33.592181   70458 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:33.592269   70458 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:34:33.594144   70458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:33.594461   70458 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:34:33.596303   70458 out.go:177] * Verifying Kubernetes components...
	I0311 21:34:33.594553   70458 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:34:33.594702   70458 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:34:33.597724   70458 addons.go:69] Setting default-storageclass=true in profile "no-preload-324578"
	I0311 21:34:33.597727   70458 addons.go:69] Setting storage-provisioner=true in profile "no-preload-324578"
	I0311 21:34:33.597739   70458 addons.go:69] Setting metrics-server=true in profile "no-preload-324578"
	I0311 21:34:33.597759   70458 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-324578"
	I0311 21:34:33.597771   70458 addons.go:234] Setting addon storage-provisioner=true in "no-preload-324578"
	I0311 21:34:33.597772   70458 addons.go:234] Setting addon metrics-server=true in "no-preload-324578"
	W0311 21:34:33.597780   70458 addons.go:243] addon storage-provisioner should already be in state true
	W0311 21:34:33.597795   70458 addons.go:243] addon metrics-server should already be in state true
	I0311 21:34:33.597828   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.597838   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.597733   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:33.598079   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598110   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.598224   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598260   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598305   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.598269   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.613473   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0311 21:34:33.613994   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.614558   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.614576   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.614946   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.615385   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.615415   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.618026   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0311 21:34:33.618201   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0311 21:34:33.618370   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.618497   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.618818   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.618833   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.618978   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.618989   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.619157   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.619343   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.619389   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.619926   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.619956   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.623211   70458 addons.go:234] Setting addon default-storageclass=true in "no-preload-324578"
	W0311 21:34:33.623232   70458 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:34:33.623260   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.623634   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.623660   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.635263   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35961
	I0311 21:34:33.635575   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.636071   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.636080   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.636462   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.636606   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.638520   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.640583   70458 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:34:33.642029   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:34:33.642045   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:34:33.642058   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.640562   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0311 21:34:33.641020   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0311 21:34:33.642572   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.643082   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.643107   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.643432   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.644002   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.644030   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.644213   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.644711   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.644733   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.645120   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.645334   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.645406   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.645861   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.645888   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.646042   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.646332   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.646548   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.646719   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.646986   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.648681   70458 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:30.659466   70604 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.244884989s)
	I0311 21:34:30.659492   70604 crio.go:451] duration metric: took 3.244983149s to extract the tarball
	I0311 21:34:30.659500   70604 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:34:30.708661   70604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:30.769502   70604 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:34:30.769530   70604 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:34:30.769540   70604 kubeadm.go:928] updating node { 192.168.50.114 8443 v1.28.4 crio true true} ...
	I0311 21:34:30.769675   70604 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-743937 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:30.769757   70604 ssh_runner.go:195] Run: crio config
	I0311 21:34:30.820223   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:34:30.820251   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:30.820267   70604 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:30.820296   70604 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-743937 NodeName:embed-certs-743937 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:34:30.820475   70604 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-743937"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:30.820563   70604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 21:34:30.833086   70604 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:30.833175   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:30.844335   70604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0311 21:34:30.863586   70604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:34:30.883598   70604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
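The kubeadm config dumped above is what just landed in /var/tmp/minikube/kubeadm.yaml.new; later in this log it is promoted to kubeadm.yaml and fed to the phased control-plane restart. A hand-run equivalent of those steps, using the same paths and bundled binaries shown in the Run: lines further down, would be:

  # Hedged sketch mirroring the phased restart driven from the generated config.
  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml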
	I0311 21:34:30.904711   70604 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:30.909433   70604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:30.924054   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:31.064573   70604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:31.096931   70604 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937 for IP: 192.168.50.114
	I0311 21:34:31.096960   70604 certs.go:194] generating shared ca certs ...
	I0311 21:34:31.096980   70604 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:31.097157   70604 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:31.097220   70604 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:31.097236   70604 certs.go:256] generating profile certs ...
	I0311 21:34:31.097368   70604 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/client.key
	I0311 21:34:31.097453   70604 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.key.c230aed9
	I0311 21:34:31.097520   70604 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.key
	I0311 21:34:31.097660   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:31.097709   70604 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:31.097770   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:31.097826   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:31.097867   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:31.097899   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:31.097958   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:31.098771   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:31.135109   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:31.173483   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:31.215059   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:31.253244   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0311 21:34:31.305450   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 21:34:31.340238   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:31.366993   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:31.393936   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:31.420998   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:31.446500   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:31.474047   70604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:31.493935   70604 ssh_runner.go:195] Run: openssl version
	I0311 21:34:31.500607   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:31.513874   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.519255   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.519303   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.525967   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:31.538995   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:31.551625   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.557235   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.557292   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.563658   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:31.576689   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:31.589299   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.594405   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.594453   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.601041   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:31.619307   70604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:31.624565   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:31.632121   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:31.638843   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:31.646400   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:31.652701   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:31.659661   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
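The -checkend 86400 probes above ask openssl whether each control-plane certificate remains valid for at least the next 86400 seconds (24 hours); a failing probe is what would typically prompt minikube to regenerate the certificate. A standalone example of the same probe:

  # Hedged example: exit 0 means "will not expire within 24h", exit 1 means it will (or the cert is unreadable).
  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "cert valid for >= 24h" || echo "cert expires within 24h"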
	I0311 21:34:31.666390   70604 kubeadm.go:391] StartCluster: {Name:embed-certs-743937 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:31.666496   70604 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:31.666546   70604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:31.716714   70604 cri.go:89] found id: ""
	I0311 21:34:31.716796   70604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:31.733945   70604 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:31.733967   70604 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:31.733974   70604 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:31.734019   70604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:31.746543   70604 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:31.747720   70604 kubeconfig.go:125] found "embed-certs-743937" server: "https://192.168.50.114:8443"
	I0311 21:34:31.749670   70604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:31.762374   70604 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.114
	I0311 21:34:31.762401   70604 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:31.762410   70604 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:31.762462   70604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:31.811965   70604 cri.go:89] found id: ""
	I0311 21:34:31.812055   70604 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:31.836539   70604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:31.849272   70604 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:31.849295   70604 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:31.849348   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:31.861345   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:31.861423   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:31.875436   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:31.887183   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:31.887251   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:31.900032   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:31.911614   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:31.911690   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:31.924791   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:31.937131   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:31.937204   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:31.949123   70604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:31.960234   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:32.089622   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:32.806370   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.033263   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.135981   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.248827   70604 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:33.248917   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:33.749207   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:33.650190   70458 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:34:33.650207   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:34:33.650223   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.653451   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.653895   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.653920   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.654131   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.654302   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.654472   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.654631   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.689121   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0311 21:34:33.689487   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.693084   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.693105   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.693596   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.693796   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.696074   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.696629   70458 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:34:33.696644   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:34:33.696662   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.699920   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.700323   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.700342   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.700564   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.700756   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.700859   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.700932   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.896331   70458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:33.969322   70458 node_ready.go:35] waiting up to 6m0s for node "no-preload-324578" to be "Ready" ...
	I0311 21:34:34.037114   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:34:34.059051   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:34:34.059080   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:34:34.094822   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:34:34.142231   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:34:34.142259   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:34:34.218979   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:34:34.219002   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:34:34.260381   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:34:35.648210   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.61103949s)
	I0311 21:34:35.648241   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.553388189s)
	I0311 21:34:35.648344   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648381   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648367   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648409   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648658   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.648675   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.648685   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648694   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648754   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.648997   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.649019   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.650050   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.650068   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.650091   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.650101   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.650367   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.650384   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.658738   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.658764   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.658991   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.659007   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.687393   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.426969773s)
	I0311 21:34:35.687453   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.687467   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.687771   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.687810   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.687828   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.687848   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.687831   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.688142   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.688164   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.688178   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.688214   70458 addons.go:470] Verifying addon metrics-server=true in "no-preload-324578"
	I0311 21:34:35.690413   70458 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
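With the addons reported as enabled, a manual follow-up (outside the test harness) would be to confirm that metrics-server's aggregated API has registered. Note that this run overrides the metrics-server image with fake.domain/registry.k8s.io/echoserver:1.4, so expect the APIService to show Available=False and kubectl top to fail; the commands below only illustrate the check.

  # Hedged follow-up check; v1beta1.metrics.k8s.io is the APIService metrics-server normally registers.
  kubectl --context no-preload-324578 get apiservice v1beta1.metrics.k8s.io
  kubectl --context no-preload-324578 top nodes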
	I0311 21:34:31.488010   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:31.488449   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:31.488471   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:31.488421   71714 retry.go:31] will retry after 2.325869089s: waiting for machine to come up
	I0311 21:34:33.815568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:33.816215   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:33.816236   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:33.816176   71714 retry.go:31] will retry after 2.457084002s: waiting for machine to come up
	I0311 21:34:34.249462   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:34.749177   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:34.778830   70604 api_server.go:72] duration metric: took 1.530004395s to wait for apiserver process to appear ...
	I0311 21:34:34.778858   70604 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:34:34.778879   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:34.779469   70604 api_server.go:269] stopped: https://192.168.50.114:8443/healthz: Get "https://192.168.50.114:8443/healthz": dial tcp 192.168.50.114:8443: connect: connection refused
	I0311 21:34:35.279027   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.110193   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:38.110221   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:38.110234   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.159861   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:38.159909   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:38.279045   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.289460   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:38.289491   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:38.779423   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.785174   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:38.785206   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
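The [+]/[-] lines are kube-apiserver's verbose healthz report. The earlier 403s show anonymous requests are rejected, so probing the endpoint by hand needs real credentials, for example the admin kubeconfig on the node once kubeadm has rewritten it:

  # Hedged manual probe of the endpoint this loop is polling (run on the control-plane node).
  sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'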
	I0311 21:34:39.278910   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:39.290017   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:39.290054   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:39.779616   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:39.786362   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0311 21:34:39.794557   70604 api_server.go:141] control plane version: v1.28.4
	I0311 21:34:39.794583   70604 api_server.go:131] duration metric: took 5.01571788s to wait for apiserver health ...
	I0311 21:34:39.794594   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:34:39.794601   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:39.796063   70604 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:34:35.691844   70458 addons.go:505] duration metric: took 2.097304232s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0311 21:34:35.974533   70458 node_ready.go:53] node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:37.983073   70458 node_ready.go:53] node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:38.977713   70458 node_ready.go:49] node "no-preload-324578" has status "Ready":"True"
	I0311 21:34:38.977738   70458 node_ready.go:38] duration metric: took 5.008382488s for node "no-preload-324578" to be "Ready" ...
	I0311 21:34:38.977749   70458 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:38.986414   70458 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:38.993430   70458 pod_ready.go:92] pod "coredns-76f75df574-s6lsb" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:38.993454   70458 pod_ready.go:81] duration metric: took 7.012539ms for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:38.993465   70458 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:36.274640   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:36.275119   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:36.275157   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:36.275064   71714 retry.go:31] will retry after 3.618026102s: waiting for machine to come up
	I0311 21:34:39.894877   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:39.895397   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:39.895447   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:39.895343   71714 retry.go:31] will retry after 3.826847061s: waiting for machine to come up
	I0311 21:34:39.797420   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:34:39.810877   70604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:34:39.836773   70604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:34:39.852496   70604 system_pods.go:59] 8 kube-system pods found
	I0311 21:34:39.852541   70604 system_pods.go:61] "coredns-5dd5756b68-czng9" [a57d0643-36c5-44e2-a113-de051d0e0408] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:34:39.852556   70604 system_pods.go:61] "etcd-embed-certs-743937" [9f0051e8-247f-4968-a834-c38c5f0c4407] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:34:39.852567   70604 system_pods.go:61] "kube-apiserver-embed-certs-743937" [4ac979a6-1906-4a58-9d41-9587d66d81ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:34:39.852578   70604 system_pods.go:61] "kube-controller-manager-embed-certs-743937" [263ba100-e911-4857-a973-c4dc9312a653] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:34:39.852591   70604 system_pods.go:61] "kube-proxy-n2qzt" [21f56cfb-a3f5-4c4b-993d-53b6d8f60ec2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:34:39.852600   70604 system_pods.go:61] "kube-scheduler-embed-certs-743937" [0121fa4d-91a8-432b-9f21-c6e8c0b33872] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:34:39.852606   70604 system_pods.go:61] "metrics-server-57f55c9bc5-7qw98" [3d3f2e87-2e36-4ca3-b31c-fc5f38251f03] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:34:39.852617   70604 system_pods.go:61] "storage-provisioner" [72fd13c7-1a79-4e8a-bdc2-f45117599d85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:34:39.852624   70604 system_pods.go:74] duration metric: took 15.823708ms to wait for pod list to return data ...
	I0311 21:34:39.852634   70604 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:34:39.856288   70604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:34:39.856309   70604 node_conditions.go:123] node cpu capacity is 2
	I0311 21:34:39.856317   70604 node_conditions.go:105] duration metric: took 3.676347ms to run NodePressure ...
	I0311 21:34:39.856331   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:40.103882   70604 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:34:40.108726   70604 kubeadm.go:733] kubelet initialised
	I0311 21:34:40.108758   70604 kubeadm.go:734] duration metric: took 4.847245ms waiting for restarted kubelet to initialise ...
	I0311 21:34:40.108768   70604 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:40.115566   70604 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-czng9" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:42.124435   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:45.026187   70417 start.go:364] duration metric: took 58.09976601s to acquireMachinesLock for "default-k8s-diff-port-766430"
	I0311 21:34:45.026231   70417 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:45.026242   70417 fix.go:54] fixHost starting: 
	I0311 21:34:45.026632   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:45.026661   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:45.046341   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0311 21:34:45.046779   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:45.047336   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:34:45.047375   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:45.047741   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:45.047920   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:34:45.048090   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:34:45.049581   70417 fix.go:112] recreateIfNeeded on default-k8s-diff-port-766430: state=Stopped err=<nil>
	I0311 21:34:45.049605   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	W0311 21:34:45.049759   70417 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:45.051505   70417 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-766430" ...
	I0311 21:34:41.001474   70458 pod_ready.go:102] pod "etcd-no-preload-324578" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:43.500991   70458 pod_ready.go:92] pod "etcd-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.501018   70458 pod_ready.go:81] duration metric: took 4.507545237s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.501030   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.506732   70458 pod_ready.go:92] pod "kube-apiserver-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.506753   70458 pod_ready.go:81] duration metric: took 5.714866ms for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.506764   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.511432   70458 pod_ready.go:92] pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.511456   70458 pod_ready.go:81] duration metric: took 4.684021ms for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.511469   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.516333   70458 pod_ready.go:92] pod "kube-proxy-rmz4b" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.516360   70458 pod_ready.go:81] duration metric: took 4.882955ms for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.516370   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.521501   70458 pod_ready.go:92] pod "kube-scheduler-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.521524   70458 pod_ready.go:81] duration metric: took 5.146945ms for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.521532   70458 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.723851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724335   70908 main.go:141] libmachine: (old-k8s-version-239315) Found IP for machine: 192.168.72.52
	I0311 21:34:43.724367   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserving static IP address...
	I0311 21:34:43.724382   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has current primary IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724722   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.724759   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserved static IP address: 192.168.72.52
	I0311 21:34:43.724774   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | skip adding static IP to network mk-old-k8s-version-239315 - found existing host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"}
	I0311 21:34:43.724797   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Getting to WaitForSSH function...
	I0311 21:34:43.724815   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting for SSH to be available...
	I0311 21:34:43.727015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727330   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.727354   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727541   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH client type: external
	I0311 21:34:43.727568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa (-rw-------)
	I0311 21:34:43.727624   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:43.727641   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | About to run SSH command:
	I0311 21:34:43.727651   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | exit 0
	I0311 21:34:43.848884   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:43.849287   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetConfigRaw
	I0311 21:34:43.850084   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:43.852942   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853529   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.853572   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853801   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:34:43.854001   70908 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:43.854024   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:43.854255   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.856623   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857153   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.857187   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857321   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.857516   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857702   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857897   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.858105   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.858332   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.858349   70908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:43.961617   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:43.961664   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.961921   70908 buildroot.go:166] provisioning hostname "old-k8s-version-239315"
	I0311 21:34:43.961945   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.962134   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.964672   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.964987   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.965015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.965122   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.965305   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965466   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965591   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.965801   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.966042   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.966055   70908 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239315 && echo "old-k8s-version-239315" | sudo tee /etc/hostname
	I0311 21:34:44.088097   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239315
	
	I0311 21:34:44.088126   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.090911   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091167   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.091205   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091347   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.091524   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091680   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091818   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.091984   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.092185   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.092205   70908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239315' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239315/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239315' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:44.207643   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:44.207674   70908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:44.207693   70908 buildroot.go:174] setting up certificates
	I0311 21:34:44.207701   70908 provision.go:84] configureAuth start
	I0311 21:34:44.207710   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:44.207975   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:44.211160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211556   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.211588   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211754   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.214211   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214553   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.214585   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214732   70908 provision.go:143] copyHostCerts
	I0311 21:34:44.214797   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:44.214813   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:44.214886   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:44.214991   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:44.215005   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:44.215035   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:44.215160   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:44.215171   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:44.215198   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:44.215267   70908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239315 san=[127.0.0.1 192.168.72.52 localhost minikube old-k8s-version-239315]
	I0311 21:34:44.305250   70908 provision.go:177] copyRemoteCerts
	I0311 21:34:44.305329   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:44.305367   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.308244   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308636   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.308673   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308874   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.309092   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.309290   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.309446   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.394958   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:44.423314   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0311 21:34:44.459338   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:34:44.491201   70908 provision.go:87] duration metric: took 283.487383ms to configureAuth
	I0311 21:34:44.491232   70908 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:44.491419   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:34:44.491484   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.494039   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494476   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.494509   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494638   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.494830   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.494998   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.495175   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.495366   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.495548   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.495570   70908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:44.787935   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:44.787961   70908 machine.go:97] duration metric: took 933.945971ms to provisionDockerMachine
	I0311 21:34:44.787971   70908 start.go:293] postStartSetup for "old-k8s-version-239315" (driver="kvm2")
	I0311 21:34:44.787983   70908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:44.788007   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:44.788327   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:44.788355   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.791133   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791460   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.791492   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791637   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.791858   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.792021   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.792165   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.877163   70908 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:44.882141   70908 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:44.882164   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:44.882241   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:44.882330   70908 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:44.882442   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:44.894699   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:44.919809   70908 start.go:296] duration metric: took 131.8264ms for postStartSetup
	I0311 21:34:44.919848   70908 fix.go:56] duration metric: took 21.376188092s for fixHost
	I0311 21:34:44.919867   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.922414   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922708   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.922738   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922876   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.923075   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923274   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923455   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.923618   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.923806   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.923831   70908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:45.026068   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192885.004450463
	
	I0311 21:34:45.026088   70908 fix.go:216] guest clock: 1710192885.004450463
	I0311 21:34:45.026096   70908 fix.go:229] Guest: 2024-03-11 21:34:45.004450463 +0000 UTC Remote: 2024-03-11 21:34:44.919851167 +0000 UTC m=+283.922086595 (delta=84.599296ms)
	I0311 21:34:45.026118   70908 fix.go:200] guest clock delta is within tolerance: 84.599296ms
	I0311 21:34:45.026124   70908 start.go:83] releasing machines lock for "old-k8s-version-239315", held for 21.482500591s
	I0311 21:34:45.026158   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.026440   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:45.029366   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029778   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.029813   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029992   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030514   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030711   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030800   70908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:45.030846   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.030946   70908 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:45.030971   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.033851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.033989   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034264   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034292   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034324   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034348   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034429   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034618   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034633   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034799   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034814   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034979   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034977   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.035143   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.135748   70908 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:45.142408   70908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:45.297445   70908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:45.304482   70908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:45.304552   70908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:45.322754   70908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:45.322775   70908 start.go:494] detecting cgroup driver to use...
	I0311 21:34:45.322832   70908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:45.345988   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:45.363267   70908 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:45.363320   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:45.380892   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:45.396972   70908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:45.531640   70908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:45.700243   70908 docker.go:233] disabling docker service ...
	I0311 21:34:45.700306   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:45.730542   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:45.749068   70908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:45.903721   70908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:46.045122   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:46.065278   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:46.090726   70908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0311 21:34:46.090779   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.105783   70908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:46.105841   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.121702   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.136262   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.150628   70908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:46.163771   70908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:46.175613   70908 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:46.175675   70908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:46.193848   70908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:46.205694   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:46.344832   70908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:46.501773   70908 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:46.501851   70908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:46.507932   70908 start.go:562] Will wait 60s for crictl version
	I0311 21:34:46.507988   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:46.512337   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:46.555165   70908 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:46.555249   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.588554   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.623785   70908 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0311 21:34:44.627149   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:47.128405   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:45.052882   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Start
	I0311 21:34:45.053039   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring networks are active...
	I0311 21:34:45.053710   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring network default is active
	I0311 21:34:45.054156   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring network mk-default-k8s-diff-port-766430 is active
	I0311 21:34:45.054499   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Getting domain xml...
	I0311 21:34:45.055347   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Creating domain...
	I0311 21:34:46.378216   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting to get IP...
	I0311 21:34:46.379054   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.379376   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.379485   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.379392   71893 retry.go:31] will retry after 242.915621ms: waiting for machine to come up
	I0311 21:34:46.623729   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.624348   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.624375   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.624304   71893 retry.go:31] will retry after 274.237436ms: waiting for machine to come up
	I0311 21:34:46.899864   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.900347   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.900381   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.900296   71893 retry.go:31] will retry after 333.693752ms: waiting for machine to come up
	I0311 21:34:47.235751   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.236278   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.236309   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:47.236220   71893 retry.go:31] will retry after 513.728994ms: waiting for machine to come up
	I0311 21:34:47.752081   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.752585   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.752622   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:47.752553   71893 retry.go:31] will retry after 575.202217ms: waiting for machine to come up
	I0311 21:34:48.329095   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:48.329524   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:48.329557   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:48.329477   71893 retry.go:31] will retry after 741.05703ms: waiting for machine to come up
	I0311 21:34:49.072641   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.073163   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.073195   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:49.073101   71893 retry.go:31] will retry after 802.911807ms: waiting for machine to come up
	I0311 21:34:45.528876   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:47.530391   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:49.530451   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:46.625154   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:46.627732   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628080   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:46.628102   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628304   70908 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:46.633367   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:46.649537   70908 kubeadm.go:877] updating cluster {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:46.649677   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:34:46.649733   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:46.699194   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:46.699264   70908 ssh_runner.go:195] Run: which lz4
	I0311 21:34:46.703944   70908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:34:46.709224   70908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:34:46.709258   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0311 21:34:48.747926   70908 crio.go:444] duration metric: took 2.044006932s to copy over tarball
	I0311 21:34:48.747994   70908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:34:49.629334   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:51.122454   70604 pod_ready.go:92] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:51.122481   70604 pod_ready.go:81] duration metric: took 11.006878828s for pod "coredns-5dd5756b68-czng9" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:51.122494   70604 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.227971   70604 pod_ready.go:92] pod "etcd-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.228001   70604 pod_ready.go:81] duration metric: took 1.105498501s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.228014   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.234804   70604 pod_ready.go:92] pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.234834   70604 pod_ready.go:81] duration metric: took 6.811865ms for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.234854   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.241448   70604 pod_ready.go:92] pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.241473   70604 pod_ready.go:81] duration metric: took 6.611927ms for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.241486   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n2qzt" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.249614   70604 pod_ready.go:92] pod "kube-proxy-n2qzt" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.249648   70604 pod_ready.go:81] duration metric: took 8.154372ms for pod "kube-proxy-n2qzt" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.249661   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:53.139924   70604 pod_ready.go:92] pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:53.139951   70604 pod_ready.go:81] duration metric: took 890.27792ms for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:53.139961   70604 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:49.877965   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.878438   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.878460   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:49.878397   71893 retry.go:31] will retry after 1.163030899s: waiting for machine to come up
	I0311 21:34:51.042660   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:51.043181   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:51.043210   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:51.043131   71893 retry.go:31] will retry after 1.225509553s: waiting for machine to come up
	I0311 21:34:52.269779   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:52.270321   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:52.270358   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:52.270250   71893 retry.go:31] will retry after 2.091046831s: waiting for machine to come up
	I0311 21:34:54.363231   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:54.363664   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:54.363693   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:54.363618   71893 retry.go:31] will retry after 1.759309864s: waiting for machine to come up
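The "will retry after ..." lines are the libmachine driver waiting for the freshly started VM to acquire a DHCP lease, re-running the IP lookup with growing, jittered delays. A rough sketch of that pattern follows; the lookupIP callback, starting delay, and doubling factor are made-up illustrations, not the driver's real constants.

package machinewait

import (
	"errors"
	"math/rand"
	"time"
)

// waitForIP retries lookupIP with an increasing, jittered delay until it
// succeeds or the overall timeout is hit, much like the retry.go lines above.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond // illustrative starting delay
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(delay + jitter)
		delay *= 2 // grow the wait, as in the lengthening intervals logged above
	}
	return "", errors.New("timed out waiting for the machine to report an IP")
}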
	I0311 21:34:52.031032   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:54.529537   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:52.300295   70908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.55227284s)
	I0311 21:34:52.300322   70908 crio.go:451] duration metric: took 3.552370125s to extract the tarball
	I0311 21:34:52.300331   70908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:34:52.349405   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:52.395791   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:52.395821   70908 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:34:52.395892   70908 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.395955   70908 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.396002   70908 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.396010   70908 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.395959   70908 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.395932   70908 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.395921   70908 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0311 21:34:52.395974   70908 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397721   70908 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.397760   70908 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.397767   70908 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.397768   70908 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.397762   70908 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397804   70908 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.398008   70908 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0311 21:34:52.398129   70908 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.548255   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.549300   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.560293   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.564094   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.564433   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.569516   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.578251   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0311 21:34:52.674385   70908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0311 21:34:52.674427   70908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.674475   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.725602   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.741797   70908 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0311 21:34:52.741840   70908 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.741882   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.793195   70908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0311 21:34:52.793239   70908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.793278   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798118   70908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0311 21:34:52.798174   70908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.798220   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798241   70908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0311 21:34:52.798277   70908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.798312   70908 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0311 21:34:52.798333   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798285   70908 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0311 21:34:52.798378   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.798399   70908 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.798434   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798336   70908 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0311 21:34:52.798510   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.957658   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.957712   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.957765   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.957816   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.957846   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0311 21:34:52.957904   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0311 21:34:52.957925   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:53.106649   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0311 21:34:53.106699   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0311 21:34:53.106913   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0311 21:34:53.107837   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0311 21:34:53.116024   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0311 21:34:53.122060   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0311 21:34:53.122118   70908 cache_images.go:92] duration metric: took 726.282306ms to LoadCachedImages
	W0311 21:34:53.122205   70908 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
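The "needs transfer" / LoadCachedImages sequence above probes each required image in the container runtime (via podman image inspect) and, because the cached per-image tarballs are missing on the host, ends with the warning just logged. A hedged sketch of the per-image presence check, shelling out to the same inspect command seen in the log; the expected-ID comparison and error handling are simplified assumptions.

package imagecheck

import (
	"os/exec"
	"strings"
)

// imagePresent runs the inspect command from the log and reports whether the
// image exists in the container runtime with the expected image ID.
func imagePresent(image, expectedID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return false // image not known to the runtime
	}
	return strings.TrimSpace(string(out)) == expectedID
}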
	I0311 21:34:53.122224   70908 kubeadm.go:928] updating node { 192.168.72.52 8443 v1.20.0 crio true true} ...
	I0311 21:34:53.122341   70908 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239315 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:53.122443   70908 ssh_runner.go:195] Run: crio config
	I0311 21:34:53.192161   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:34:53.192191   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:53.192211   70908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:53.192233   70908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.52 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239315 NodeName:old-k8s-version-239315 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0311 21:34:53.192405   70908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239315"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:53.192476   70908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0311 21:34:53.203965   70908 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:53.204019   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:53.215221   70908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0311 21:34:53.235943   70908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:34:53.255383   70908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
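The kubeadm.yaml rendered above is written to /var/tmp/minikube/kubeadm.yaml.new here and, during the restart path logged further down, copied into place and replayed phase by phase. A sketch of that replay using the exact bash invocations that appear later in this log; the wrapper function and error handling are illustrative only.

package kubeadmreplay

import (
	"fmt"
	"os/exec"
)

// replayInitPhases re-runs the kubeadm init phases recorded in the restart
// path of this log, each through bash -c as minikube does.
func replayInitPhases() error {
	cmds := []string{
		`sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml`,
		`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml`,
		`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml`,
		`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml`,
		`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml`,
		`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml`,
	}
	for _, c := range cmds {
		if out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput(); err != nil {
			return fmt.Errorf("%q failed: %v\n%s", c, err, out)
		}
	}
	return nil
}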
	I0311 21:34:53.276634   70908 ssh_runner.go:195] Run: grep 192.168.72.52	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:53.281778   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:53.298479   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:53.450052   70908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:53.472459   70908 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315 for IP: 192.168.72.52
	I0311 21:34:53.472480   70908 certs.go:194] generating shared ca certs ...
	I0311 21:34:53.472524   70908 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:53.472676   70908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:53.472728   70908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:53.472771   70908 certs.go:256] generating profile certs ...
	I0311 21:34:53.472883   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/client.key
	I0311 21:34:53.472954   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key.1e888bb1
	I0311 21:34:53.473013   70908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key
	I0311 21:34:53.473143   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:53.473185   70908 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:53.473198   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:53.473237   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:53.473272   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:53.473307   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:53.473363   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:53.473988   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:53.527429   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:53.575908   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:53.622438   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:53.665366   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 21:34:53.702121   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0311 21:34:53.746066   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:53.779151   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:53.813286   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:53.847058   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:53.882261   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:53.912444   70908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:53.932592   70908 ssh_runner.go:195] Run: openssl version
	I0311 21:34:53.939200   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:53.955630   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960866   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960920   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.967258   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:53.981075   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:53.995065   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000196   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000272   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.008574   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:54.022782   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:54.037409   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042893   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042965   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.049497   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:54.062597   70908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:54.067971   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:54.074746   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:54.081323   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:54.088762   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:54.095529   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:54.102396   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
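Each "openssl x509 ... -checkend 86400" run above asks whether the certificate will still be valid 24 hours from now; a non-zero exit means it expires within that window and would need regeneration. An equivalent check using Go's standard library, with the file path left as a placeholder argument:

package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside the
// given window, matching what "-checkend 86400" (24h) tests in the log above.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}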
	I0311 21:34:54.109553   70908 kubeadm.go:391] StartCluster: {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:54.109639   70908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:54.109689   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.152063   70908 cri.go:89] found id: ""
	I0311 21:34:54.152143   70908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:54.163988   70908 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:54.164005   70908 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:54.164011   70908 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:54.164050   70908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:54.175616   70908 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:54.176779   70908 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239315" does not appear in /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:34:54.177542   70908 kubeconfig.go:62] /home/jenkins/minikube-integration/18358-11004/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239315" cluster setting kubeconfig missing "old-k8s-version-239315" context setting]
	I0311 21:34:54.178649   70908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:54.180405   70908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:54.191864   70908 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.52
	I0311 21:34:54.191891   70908 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:54.191903   70908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:54.191948   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.233779   70908 cri.go:89] found id: ""
	I0311 21:34:54.233852   70908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:54.253672   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:54.266010   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:54.266038   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:54.266085   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:54.277867   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:54.277918   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:54.288984   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:54.300133   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:54.300197   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:54.312090   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.323997   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:54.324059   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.337225   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:54.348223   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:54.348266   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
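The block above is the stale-config check: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf it greps for the expected control-plane endpoint and removes the file when the endpoint is absent (here the files simply do not exist yet, so every grep fails and every rm is a no-op). A compact sketch of that loop using the same grep/rm commands from the log; the helper name and error handling are made up.

package staleconfig

import (
	"fmt"
	"os/exec"
)

// cleanStaleConfigs removes any kubeconfig under /etc/kubernetes that does not
// reference the expected control-plane endpoint, mirroring the grep/rm pairs above.
func cleanStaleConfigs(endpoint string) error {
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint is missing or the file is absent.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			if err := exec.Command("sudo", "rm", "-f", path).Run(); err != nil {
				return fmt.Errorf("removing %s: %v", path, err)
			}
		}
	}
	return nil
}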
	I0311 21:34:54.359245   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:54.370003   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:54.525972   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.408437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.676995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.819933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.913736   70908 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:55.913811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:55.147500   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:57.148276   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:56.124678   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:56.125150   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:56.125183   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:56.125101   71893 retry.go:31] will retry after 2.284226205s: waiting for machine to come up
	I0311 21:34:58.412391   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:58.412973   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:58.413002   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:58.412923   71893 retry.go:31] will retry after 4.532871869s: waiting for machine to come up
	I0311 21:34:57.031683   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:59.032261   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:56.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:56.914753   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.413928   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.914123   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.413931   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.914199   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.414205   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.913880   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.914121   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.148774   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:01.646997   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:03.647990   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:02.948316   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:02.948762   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:35:02.948790   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:35:02.948704   71893 retry.go:31] will retry after 4.885152649s: waiting for machine to come up
	I0311 21:35:01.529589   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:04.028860   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:01.414003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:01.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.913977   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.414740   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.914735   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.414726   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.914846   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.914715   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.648516   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:08.147744   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:07.835002   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.835551   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Found IP for machine: 192.168.61.11
	I0311 21:35:07.835585   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Reserving static IP address...
	I0311 21:35:07.835601   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has current primary IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.836026   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-766430", mac: "52:54:00:41:07:8d", ip: "192.168.61.11"} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.836055   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | skip adding static IP to network mk-default-k8s-diff-port-766430 - found existing host DHCP lease matching {name: "default-k8s-diff-port-766430", mac: "52:54:00:41:07:8d", ip: "192.168.61.11"}
	I0311 21:35:07.836075   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Reserved static IP address: 192.168.61.11
	I0311 21:35:07.836110   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Getting to WaitForSSH function...
	I0311 21:35:07.836125   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for SSH to be available...
	I0311 21:35:07.838230   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.838601   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.838631   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.838757   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Using SSH client type: external
	I0311 21:35:07.838784   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa (-rw-------)
	I0311 21:35:07.838830   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:35:07.838871   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | About to run SSH command:
	I0311 21:35:07.838897   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | exit 0
	I0311 21:35:07.968765   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | SSH cmd err, output: <nil>: 
	I0311 21:35:07.969119   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetConfigRaw
	I0311 21:35:07.969756   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:07.972490   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.972921   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.972949   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.973180   70417 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/config.json ...
	I0311 21:35:07.973362   70417 machine.go:94] provisionDockerMachine start ...
	I0311 21:35:07.973381   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:07.973582   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:07.975926   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.976254   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.976277   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.976419   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:07.976566   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:07.976704   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:07.976847   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:07.976991   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:07.977161   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:07.977171   70417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:35:08.093841   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:35:08.093864   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.094076   70417 buildroot.go:166] provisioning hostname "default-k8s-diff-port-766430"
	I0311 21:35:08.094100   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.094329   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.097134   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.097498   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.097528   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.097670   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.097854   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.098021   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.098178   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.098409   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.098642   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.098657   70417 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-766430 && echo "default-k8s-diff-port-766430" | sudo tee /etc/hostname
	I0311 21:35:08.233860   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766430
	
	I0311 21:35:08.233890   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.236977   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.237387   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.237408   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.237596   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.237791   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.237962   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.238194   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.238359   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.238515   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.238532   70417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-766430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-766430/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-766430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:35:08.363393   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:35:08.363419   70417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:35:08.363471   70417 buildroot.go:174] setting up certificates
	I0311 21:35:08.363484   70417 provision.go:84] configureAuth start
	I0311 21:35:08.363497   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.363780   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:08.366605   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.366990   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.367012   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.367139   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.369314   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.369650   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.369676   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.369798   70417 provision.go:143] copyHostCerts
	I0311 21:35:08.369853   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:35:08.369863   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:35:08.369915   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:35:08.370005   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:35:08.370013   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:35:08.370032   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:35:08.370091   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:35:08.370098   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:35:08.370114   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:35:08.370169   70417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-766430 san=[127.0.0.1 192.168.61.11 default-k8s-diff-port-766430 localhost minikube]
	I0311 21:35:08.542469   70417 provision.go:177] copyRemoteCerts
	I0311 21:35:08.542529   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:35:08.542550   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.545388   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.545750   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.545782   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.545958   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.546115   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.546264   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.546360   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:08.635866   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:35:08.667490   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0311 21:35:08.697944   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:35:08.726836   70417 provision.go:87] duration metric: took 363.34159ms to configureAuth
	I0311 21:35:08.726860   70417 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:35:08.727033   70417 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:35:08.727115   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.730050   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.730458   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.730489   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.730788   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.730987   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.731170   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.731317   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.731466   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.731607   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.731629   70417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:35:09.035100   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:35:09.035129   70417 machine.go:97] duration metric: took 1.061753229s to provisionDockerMachine
	I0311 21:35:09.035142   70417 start.go:293] postStartSetup for "default-k8s-diff-port-766430" (driver="kvm2")
	I0311 21:35:09.035151   70417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:35:09.035165   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.035458   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:35:09.035484   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.038340   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.038638   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.038668   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.038829   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.039027   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.039178   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.039343   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:09.133013   70417 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:35:09.138043   70417 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:35:09.138065   70417 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:35:09.138166   70417 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:35:09.138259   70417 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:35:09.138364   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:35:09.149527   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:35:09.176424   70417 start.go:296] duration metric: took 141.271199ms for postStartSetup
	I0311 21:35:09.176460   70417 fix.go:56] duration metric: took 24.15021813s for fixHost
	I0311 21:35:09.176479   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.179447   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.179830   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.179859   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.180147   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.180402   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.180566   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.180758   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.180974   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:09.181186   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:09.181200   70417 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:35:09.297740   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192909.282566583
	
	I0311 21:35:09.297764   70417 fix.go:216] guest clock: 1710192909.282566583
	I0311 21:35:09.297773   70417 fix.go:229] Guest: 2024-03-11 21:35:09.282566583 +0000 UTC Remote: 2024-03-11 21:35:09.176465496 +0000 UTC m=+364.839103648 (delta=106.101087ms)
	I0311 21:35:09.297795   70417 fix.go:200] guest clock delta is within tolerance: 106.101087ms
	I0311 21:35:09.297802   70417 start.go:83] releasing machines lock for "default-k8s-diff-port-766430", held for 24.271590337s
	I0311 21:35:09.297825   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.298067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:09.300989   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.301399   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.301422   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.301604   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302091   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302291   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302385   70417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:35:09.302433   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.302490   70417 ssh_runner.go:195] Run: cat /version.json
	I0311 21:35:09.302515   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.305403   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305572   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305802   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.305831   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305912   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.306042   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.306067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.306067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.306223   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.306351   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.306430   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:09.306511   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.306645   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.306772   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:06.528726   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:09.029055   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:09.419852   70417 ssh_runner.go:195] Run: systemctl --version
	I0311 21:35:09.427141   70417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:35:09.579321   70417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:35:09.586396   70417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:35:09.586470   70417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:35:09.606617   70417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:35:09.606639   70417 start.go:494] detecting cgroup driver to use...
	I0311 21:35:09.606705   70417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:35:09.627066   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:35:09.646091   70417 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:35:09.646151   70417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:35:09.662307   70417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:35:09.679793   70417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:35:09.828827   70417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:35:09.984773   70417 docker.go:233] disabling docker service ...
	I0311 21:35:09.984843   70417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:35:10.003968   70417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:35:10.018609   70417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:35:10.174297   70417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:35:10.316762   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:35:10.338008   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:35:10.359320   70417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:35:10.359374   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.371953   70417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:35:10.372008   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.384823   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.397305   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.409521   70417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:35:10.424714   70417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:35:10.438470   70417 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:35:10.438529   70417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:35:10.454436   70417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:35:10.465004   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:35:10.611379   70417 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:35:10.786860   70417 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:35:10.786959   70417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:35:10.792496   70417 start.go:562] Will wait 60s for crictl version
	I0311 21:35:10.792551   70417 ssh_runner.go:195] Run: which crictl
	I0311 21:35:10.797079   70417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:35:10.837010   70417 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:35:10.837086   70417 ssh_runner.go:195] Run: crio --version
	I0311 21:35:10.868308   70417 ssh_runner.go:195] Run: crio --version
	I0311 21:35:10.900087   70417 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 21:35:06.414389   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:06.914233   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.414565   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.914773   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.414348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.914743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.413987   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.914698   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.150688   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:12.648444   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:10.901304   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:10.904103   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:10.904380   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:10.904407   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:10.904557   70417 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0311 21:35:10.909585   70417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:35:10.924163   70417 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-766430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:35:10.924311   70417 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:35:10.924408   70417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:35:10.969555   70417 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 21:35:10.969623   70417 ssh_runner.go:195] Run: which lz4
	I0311 21:35:10.974054   70417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0311 21:35:10.978776   70417 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:35:10.978811   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 21:35:12.893346   70417 crio.go:444] duration metric: took 1.91931676s to copy over tarball
	I0311 21:35:12.893421   70417 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:35:11.031301   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:13.527896   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:11.414320   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:11.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.414529   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.914476   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.414282   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.914426   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.414521   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.914001   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.414839   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.913921   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.648625   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:17.148688   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:15.772070   70417 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.878627154s)
	I0311 21:35:15.772094   70417 crio.go:451] duration metric: took 2.878719213s to extract the tarball
	I0311 21:35:15.772101   70417 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:35:15.818581   70417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:35:15.872635   70417 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:35:15.872658   70417 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:35:15.872667   70417 kubeadm.go:928] updating node { 192.168.61.11 8444 v1.28.4 crio true true} ...
	I0311 21:35:15.872823   70417 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-766430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:35:15.872933   70417 ssh_runner.go:195] Run: crio config
	I0311 21:35:15.928776   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:35:15.928803   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:35:15.928818   70417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:35:15.928843   70417 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-766430 NodeName:default-k8s-diff-port-766430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:35:15.929018   70417 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-766430"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:35:15.929090   70417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 21:35:15.941853   70417 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:35:15.941908   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:35:15.954936   70417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0311 21:35:15.975236   70417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:35:15.994509   70417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0311 21:35:16.014058   70417 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I0311 21:35:16.018972   70417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:35:16.035169   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:35:16.160453   70417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:35:16.182252   70417 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430 for IP: 192.168.61.11
	I0311 21:35:16.182272   70417 certs.go:194] generating shared ca certs ...
	I0311 21:35:16.182286   70417 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:35:16.182419   70417 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:35:16.182465   70417 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:35:16.182475   70417 certs.go:256] generating profile certs ...
	I0311 21:35:16.182545   70417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/client.key
	I0311 21:35:16.182601   70417 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.key.2c00376c
	I0311 21:35:16.182635   70417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.key
	I0311 21:35:16.182754   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:35:16.182783   70417 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:35:16.182789   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:35:16.182823   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:35:16.182844   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:35:16.182867   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:35:16.182901   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:35:16.183517   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:35:16.231409   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:35:16.277004   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:35:16.315346   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:35:16.352697   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0311 21:35:16.388570   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 21:35:16.422830   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:35:16.452562   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 21:35:16.480976   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:35:16.507149   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:35:16.535832   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:35:16.566697   70417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:35:16.587454   70417 ssh_runner.go:195] Run: openssl version
	I0311 21:35:16.593880   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:35:16.608197   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.613604   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.613673   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.620156   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:35:16.632634   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:35:16.646047   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.652530   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.652591   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.660480   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:35:16.673572   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:35:16.687161   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.692589   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.692632   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.705471   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:35:16.718251   70417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:35:16.723979   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:35:16.731335   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:35:16.738485   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:35:16.745489   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:35:16.752295   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:35:16.759251   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 21:35:16.766128   70417 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-766430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:35:16.766237   70417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:35:16.766292   70417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:35:16.806418   70417 cri.go:89] found id: ""
	I0311 21:35:16.806478   70417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:35:16.821434   70417 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:35:16.821455   70417 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:35:16.821462   70417 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:35:16.821514   70417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:35:16.835457   70417 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:35:16.836764   70417 kubeconfig.go:125] found "default-k8s-diff-port-766430" server: "https://192.168.61.11:8444"
	I0311 21:35:16.839163   70417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:35:16.850037   70417 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.11
	I0311 21:35:16.850065   70417 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:35:16.850074   70417 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:35:16.850117   70417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:35:16.895532   70417 cri.go:89] found id: ""
	I0311 21:35:16.895612   70417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:35:16.913151   70417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:35:16.927989   70417 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:35:16.928014   70417 kubeadm.go:156] found existing configuration files:
	
	I0311 21:35:16.928073   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0311 21:35:16.939803   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:35:16.939849   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:35:16.950103   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0311 21:35:16.960164   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:35:16.960213   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:35:16.970349   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0311 21:35:16.980056   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:35:16.980098   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:35:16.990189   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0311 21:35:16.999799   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:35:16.999874   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:35:17.010502   70417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:35:17.021106   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:17.136170   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.044684   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.296278   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.376702   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
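The five `kubeadm init phase` invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) rebuild the control plane from the freshly copied kubeadm.yaml. A hedged sketch of that sequence as a loop; the binary and config paths match the log, the error handling is illustrative only:

    // Hedged sketch: run the kubeadm init phases in the order seen in the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubeadm", args...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
    			return
    		}
    	}
    }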
	I0311 21:35:18.473740   70417 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:35:18.473840   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.974894   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.529099   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:17.755777   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:20.028341   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:16.414018   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:16.914685   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.414894   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.914319   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.414875   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.914338   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.414496   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.914396   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.414731   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.914149   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.648967   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:22.148024   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:19.474609   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.499907   70417 api_server.go:72] duration metric: took 1.026169594s to wait for apiserver process to appear ...
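The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a simple poll: the wait ends as soon as pgrep exits 0. A hedged sketch of that loop (~500ms cadence, as in the log):

    // Hedged sketch: wait for the kube-apiserver process to appear by polling pgrep.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }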
	I0311 21:35:19.499931   70417 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:35:19.499951   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:19.500566   70417 api_server.go:269] stopped: https://192.168.61.11:8444/healthz: Get "https://192.168.61.11:8444/healthz": dial tcp 192.168.61.11:8444: connect: connection refused
	I0311 21:35:20.000807   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:22.693958   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:35:22.693991   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:35:22.694006   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:22.772747   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:22.772792   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:23.000004   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:23.004763   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:23.004805   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:23.500112   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:23.507209   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:23.507236   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:24.000861   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:24.006793   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:24.006830   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:24.500264   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:24.508242   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0311 21:35:24.520230   70417 api_server.go:141] control plane version: v1.28.4
	I0311 21:35:24.520255   70417 api_server.go:131] duration metric: took 5.020318338s to wait for apiserver health ...
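The healthz wait above progresses from connection refused, to 403 (RBAC not yet bootstrapped for anonymous requests), to 500 (post-start hooks still failing), and finally to 200. A hedged sketch of such a poll; the insecure TLS config stands in for an anonymous probe and is an assumption:

    // Hedged sketch: poll /healthz until it returns 200, treating 403 and 500 as "keep waiting".
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver is healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // same ~500ms cadence as the log
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.11:8444/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }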
	I0311 21:35:24.520285   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:35:24.520291   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:35:24.522151   70417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:35:22.029963   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:24.530052   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:21.414126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:21.914012   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.414680   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.414478   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.914770   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.414370   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.914772   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.413991   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.914516   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.149179   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:26.647134   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:28.647725   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:24.523964   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:35:24.538536   70417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
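The scp above drops a bridge CNI conflist into /etc/cni/net.d. The exact 457-byte file is not shown in the log; the sketch below writes a generic bridge/portmap conflist to the same path purely as an illustration of the shape of that config:

    // Hedged sketch: install a bridge CNI config (illustrative content, not the real file).
    package main

    import "os"

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	_ = os.MkdirAll("/etc/cni/net.d", 0o755)
    	// Writing requires root on a real node; treat this as illustration only.
    	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
    }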
	I0311 21:35:24.583279   70417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:35:24.594703   70417 system_pods.go:59] 8 kube-system pods found
	I0311 21:35:24.594730   70417 system_pods.go:61] "coredns-5dd5756b68-pkn9d" [ee4de3f7-1044-4dc9-91dc-d9b23493b0bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:35:24.594737   70417 system_pods.go:61] "etcd-default-k8s-diff-port-766430" [96b9327c-f97d-463f-9d1e-3210b4032aab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:35:24.594751   70417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766430" [fc650f48-2e28-4219-8571-8b6c43891eb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:35:24.594763   70417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766430" [c7cc5d40-ad56-4132-ab81-3422ffe1d5b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:35:24.594772   70417 system_pods.go:61] "kube-proxy-cggzr" [f6b7fe4e-7d57-4604-b63d-f9890826b659] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:35:24.594784   70417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766430" [8a156fec-b2f3-46e8-bf0d-0bf291ef8783] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:35:24.594795   70417 system_pods.go:61] "metrics-server-57f55c9bc5-kxl6n" [ac62700b-a39a-480e-841e-852bf3c66e7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:35:24.594805   70417 system_pods.go:61] "storage-provisioner" [a0b03582-0d90-4a7f-919c-0552046edcb5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:35:24.594821   70417 system_pods.go:74] duration metric: took 11.523907ms to wait for pod list to return data ...
	I0311 21:35:24.594830   70417 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:35:24.606500   70417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:35:24.606529   70417 node_conditions.go:123] node cpu capacity is 2
	I0311 21:35:24.606546   70417 node_conditions.go:105] duration metric: took 11.711241ms to run NodePressure ...
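The system_pods and node_conditions checks above amount to listing kube-system pods and reading node capacity (cpu, ephemeral storage). A hedged client-go sketch of the same reads; the kubeconfig path is an assumption:

    // Hedged sketch: list kube-system pods and print node capacity with client-go.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
    			n.Name,
    			n.Status.Capacity.Cpu().String(),
    			n.Status.Capacity.StorageEphemeral().String())
    	}
    }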
	I0311 21:35:24.606565   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:24.893361   70417 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:35:24.899200   70417 kubeadm.go:733] kubelet initialised
	I0311 21:35:24.899225   70417 kubeadm.go:734] duration metric: took 5.837351ms waiting for restarted kubelet to initialise ...
	I0311 21:35:24.899235   70417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:35:24.905858   70417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:26.912640   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:28.916566   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
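The pod_ready.go lines alternate between "Ready":"False" (status 102) and "Ready":"True" (status 92); the decision reduces to inspecting the PodReady condition on the pod status. A hedged sketch of that check; the pod object in main is fabricated only to exercise the helper:

    // Hedged sketch: report whether a pod's PodReady condition is True.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
    		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
    	}}}
    	fmt.Println("ready:", isPodReady(pod))
    }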
	I0311 21:35:27.029381   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:29.529565   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:26.414267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:26.914876   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.414469   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.914513   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.414924   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.914126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.414526   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.914039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.414305   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.914438   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.147527   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:33.147694   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.413246   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.912878   70417 pod_ready.go:92] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:31.912899   70417 pod_ready.go:81] duration metric: took 7.007017714s for pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:31.912908   70417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:33.977091   70417 pod_ready.go:102] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:32.029295   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:34.529021   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.414610   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.914472   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.414158   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.914169   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.414745   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.914820   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.414071   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.914228   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.414135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.914695   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.148058   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:37.648200   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.422565   70417 pod_ready.go:102] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.921304   70417 pod_ready.go:92] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.921328   70417 pod_ready.go:81] duration metric: took 5.008411943s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.921340   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.927268   70417 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.927284   70417 pod_ready.go:81] duration metric: took 5.936969ms for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.927292   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.932540   70417 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.932563   70417 pod_ready.go:81] duration metric: took 5.264737ms for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.932575   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cggzr" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.937456   70417 pod_ready.go:92] pod "kube-proxy-cggzr" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.937473   70417 pod_ready.go:81] duration metric: took 4.892276ms for pod "kube-proxy-cggzr" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.937480   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.942372   70417 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.942390   70417 pod_ready.go:81] duration metric: took 4.902792ms for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.942401   70417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:38.949452   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.531316   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:39.030491   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.414435   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:36.914157   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.414539   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.914811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.414070   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.914303   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.413935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.914135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.414569   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.914106   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.147355   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:42.148353   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:40.950204   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:42.950335   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:41.528874   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:43.530140   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:41.414404   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:41.914323   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.414215   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.914566   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.414671   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.914658   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.414703   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.913966   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.414045   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.914260   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.648282   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:47.148247   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:45.449963   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:47.451576   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:46.029164   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:48.529137   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:46.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:46.914821   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.414210   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.914008   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.413884   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.914160   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.414877   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.914379   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.414293   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.913867   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.148585   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.648372   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:49.949667   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.950874   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:53.953067   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:50.529616   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:53.030586   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.414582   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:51.914453   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.414668   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.914816   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.414768   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.914592   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.414743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.914307   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:55.414000   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:55.914553   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:55.914636   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:55.957434   70908 cri.go:89] found id: ""
	I0311 21:35:55.957459   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.957470   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:55.957477   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:55.957545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:55.995255   70908 cri.go:89] found id: ""
	I0311 21:35:55.995279   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.995290   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:55.995305   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:55.995364   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:56.038893   70908 cri.go:89] found id: ""
	I0311 21:35:56.038916   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.038926   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:56.038933   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:56.038990   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:54.147165   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.148641   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.647841   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.451057   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.950421   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:55.528922   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.029209   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:00.029912   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.081497   70908 cri.go:89] found id: ""
	I0311 21:35:56.081517   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.081528   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:56.081534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:56.081591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:56.120047   70908 cri.go:89] found id: ""
	I0311 21:35:56.120071   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.120079   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:56.120084   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:56.120156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:56.157350   70908 cri.go:89] found id: ""
	I0311 21:35:56.157370   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.157377   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:56.157382   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:56.157433   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:56.198324   70908 cri.go:89] found id: ""
	I0311 21:35:56.198354   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.198374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:56.198381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:56.198437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:56.236579   70908 cri.go:89] found id: ""
	I0311 21:35:56.236608   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.236619   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
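Each "found id: \"\" / 0 containers" pair above comes from running `crictl ps -a --quiet --name=<name>` and treating empty output as "no container found". A hedged sketch of that loop over the same component names:

    // Hedged sketch: check for CRI containers by name via crictl, as in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func findContainers(name string) []string {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil || len(strings.TrimSpace(string(out))) == 0 {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
    		ids := findContainers(name)
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("found %d container(s) for %q\n", len(ids), name)
    	}
    }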
	I0311 21:35:56.236691   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:56.236712   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:56.377789   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:35:56.377809   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:56.377825   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:56.449765   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:56.449807   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:56.502417   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:56.502448   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:56.557205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:56.557241   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
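The log-gathering pass above shells out to journalctl, crictl, dmesg, and kubectl describe in turn. A hedged sketch that runs the same commands locally and collects their output; the exec-based runner is illustrative, not minikube's ssh_runner:

    // Hedged sketch: collect the same diagnostics the log-gathering pass runs.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gather(label, script string) {
    	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    	fmt.Printf("=== %s (err=%v) ===\n%s\n", label, err, out)
    }

    func main() {
    	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
    	gather("CRI-O", `sudo journalctl -u crio -n 400`)
    	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
    	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    	gather("describe nodes",
    		`sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
    }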
	I0311 21:35:59.073411   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:59.088205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:59.088287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:59.126458   70908 cri.go:89] found id: ""
	I0311 21:35:59.126486   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.126494   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:59.126499   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:59.126555   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:59.197887   70908 cri.go:89] found id: ""
	I0311 21:35:59.197911   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.197919   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:59.197924   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:59.197967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:59.239523   70908 cri.go:89] found id: ""
	I0311 21:35:59.239552   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.239562   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:59.239570   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:59.239642   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:59.280903   70908 cri.go:89] found id: ""
	I0311 21:35:59.280930   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.280940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:59.280947   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:59.281024   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:59.320218   70908 cri.go:89] found id: ""
	I0311 21:35:59.320242   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.320254   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:59.320260   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:59.320314   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:59.361235   70908 cri.go:89] found id: ""
	I0311 21:35:59.361265   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.361276   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:59.361283   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:59.361352   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:59.409477   70908 cri.go:89] found id: ""
	I0311 21:35:59.409503   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.409514   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:59.409522   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:59.409568   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:59.454704   70908 cri.go:89] found id: ""
	I0311 21:35:59.454728   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.454739   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:35:59.454748   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:59.454767   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:59.525839   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:59.525864   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:59.569577   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:59.569606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:59.628402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:59.628437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:35:59.647181   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:59.647208   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:59.731300   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:00.650515   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:03.146560   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:01.449702   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:03.950341   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:02.030569   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:04.529453   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:02.232458   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:02.246948   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:02.247025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:02.290561   70908 cri.go:89] found id: ""
	I0311 21:36:02.290588   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.290599   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:02.290605   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:02.290659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:02.333788   70908 cri.go:89] found id: ""
	I0311 21:36:02.333814   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.333821   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:02.333826   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:02.333877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:02.375774   70908 cri.go:89] found id: ""
	I0311 21:36:02.375798   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.375806   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:02.375812   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:02.375862   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:02.414741   70908 cri.go:89] found id: ""
	I0311 21:36:02.414781   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.414803   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:02.414810   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:02.414875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:02.456637   70908 cri.go:89] found id: ""
	I0311 21:36:02.456660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.456670   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:02.456677   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:02.456759   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:02.494633   70908 cri.go:89] found id: ""
	I0311 21:36:02.494660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.494670   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:02.494678   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:02.494738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:02.536187   70908 cri.go:89] found id: ""
	I0311 21:36:02.536212   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.536223   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:02.536230   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:02.536291   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:02.574933   70908 cri.go:89] found id: ""
	I0311 21:36:02.574962   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.574973   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:02.574985   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:02.575001   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:02.656610   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:02.656637   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:02.656653   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:02.730514   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:02.730548   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:02.776009   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:02.776041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:02.829792   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:02.829826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.345568   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:05.360082   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:05.360164   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:05.406106   70908 cri.go:89] found id: ""
	I0311 21:36:05.406131   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.406141   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:05.406147   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:05.406203   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:05.449584   70908 cri.go:89] found id: ""
	I0311 21:36:05.449608   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.449617   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:05.449624   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:05.449680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:05.493869   70908 cri.go:89] found id: ""
	I0311 21:36:05.493898   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.493912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:05.493928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:05.493994   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:05.563506   70908 cri.go:89] found id: ""
	I0311 21:36:05.563532   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.563542   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:05.563549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:05.563600   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:05.630140   70908 cri.go:89] found id: ""
	I0311 21:36:05.630165   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.630172   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:05.630177   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:05.630230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:05.675584   70908 cri.go:89] found id: ""
	I0311 21:36:05.675612   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.675623   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:05.675631   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:05.675689   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:05.720521   70908 cri.go:89] found id: ""
	I0311 21:36:05.720548   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.720557   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:05.720563   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:05.720615   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:05.759323   70908 cri.go:89] found id: ""
	I0311 21:36:05.759351   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.759359   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:05.759367   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:05.759379   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:05.801024   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:05.801050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:05.856330   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:05.856356   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.871299   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:05.871324   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:05.950218   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:05.950245   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:05.950259   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:05.148227   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:07.647389   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:05.950833   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:08.449548   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:07.028964   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:09.029396   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:08.535502   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:08.552152   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:08.552220   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:08.596602   70908 cri.go:89] found id: ""
	I0311 21:36:08.596707   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.596731   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:08.596755   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:08.596820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:08.641091   70908 cri.go:89] found id: ""
	I0311 21:36:08.641119   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.641130   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:08.641137   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:08.641198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:08.684466   70908 cri.go:89] found id: ""
	I0311 21:36:08.684494   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.684503   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:08.684510   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:08.684570   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:08.730899   70908 cri.go:89] found id: ""
	I0311 21:36:08.730924   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.730931   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:08.730937   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:08.730997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:08.775293   70908 cri.go:89] found id: ""
	I0311 21:36:08.775317   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.775324   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:08.775330   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:08.775387   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:08.816098   70908 cri.go:89] found id: ""
	I0311 21:36:08.816126   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.816137   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:08.816144   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:08.816207   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:08.857413   70908 cri.go:89] found id: ""
	I0311 21:36:08.857449   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.857460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:08.857476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:08.857541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:08.898252   70908 cri.go:89] found id: ""
	I0311 21:36:08.898283   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.898293   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:08.898302   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:08.898313   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:08.955162   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:08.955188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:08.970234   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:08.970258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:09.055025   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:09.055043   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:09.055055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:09.140345   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:09.140376   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:10.148323   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:12.647037   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:10.450796   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:12.450839   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:11.529842   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:14.029706   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:11.681542   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:11.697407   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:11.697481   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:11.740239   70908 cri.go:89] found id: ""
	I0311 21:36:11.740264   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.740274   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:11.740280   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:11.740336   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:11.777625   70908 cri.go:89] found id: ""
	I0311 21:36:11.777655   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.777667   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:11.777674   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:11.777745   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:11.817202   70908 cri.go:89] found id: ""
	I0311 21:36:11.817226   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.817233   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:11.817239   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:11.817306   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:11.858912   70908 cri.go:89] found id: ""
	I0311 21:36:11.858933   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.858940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:11.858945   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:11.858998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:11.897841   70908 cri.go:89] found id: ""
	I0311 21:36:11.897876   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.897887   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:11.897895   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:11.897955   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:11.936181   70908 cri.go:89] found id: ""
	I0311 21:36:11.936207   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.936218   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:11.936226   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:11.936293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:11.981882   70908 cri.go:89] found id: ""
	I0311 21:36:11.981905   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.981915   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:11.981922   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:11.981982   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:12.022270   70908 cri.go:89] found id: ""
	I0311 21:36:12.022298   70908 logs.go:276] 0 containers: []
	W0311 21:36:12.022309   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:12.022320   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:12.022333   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:12.074640   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:12.074668   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:12.089854   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:12.089879   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:12.179578   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:12.179595   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:12.179606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:12.263249   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:12.263285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:14.811547   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:14.827075   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:14.827175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:14.870512   70908 cri.go:89] found id: ""
	I0311 21:36:14.870544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.870555   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:14.870563   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:14.870625   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:14.908521   70908 cri.go:89] found id: ""
	I0311 21:36:14.908544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.908553   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:14.908558   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:14.908607   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:14.951702   70908 cri.go:89] found id: ""
	I0311 21:36:14.951729   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.951739   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:14.951746   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:14.951805   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:14.992590   70908 cri.go:89] found id: ""
	I0311 21:36:14.992618   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.992630   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:14.992638   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:14.992698   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:15.034535   70908 cri.go:89] found id: ""
	I0311 21:36:15.034556   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.034563   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:15.034569   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:15.034614   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:15.077175   70908 cri.go:89] found id: ""
	I0311 21:36:15.077200   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.077210   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:15.077218   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:15.077283   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:15.121500   70908 cri.go:89] found id: ""
	I0311 21:36:15.121530   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.121541   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:15.121549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:15.121655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:15.162712   70908 cri.go:89] found id: ""
	I0311 21:36:15.162738   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.162748   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:15.162757   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:15.162776   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:15.241469   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:15.241488   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:15.241499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:15.322257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:15.322291   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:15.368258   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:15.368285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:15.427131   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:15.427163   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:14.648776   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:17.148710   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:14.452948   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:16.949085   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:18.950111   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:16.030409   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:18.529122   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:17.944348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:17.958629   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:17.958704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:17.995869   70908 cri.go:89] found id: ""
	I0311 21:36:17.995895   70908 logs.go:276] 0 containers: []
	W0311 21:36:17.995904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:17.995914   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:17.995976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:18.032273   70908 cri.go:89] found id: ""
	I0311 21:36:18.032300   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.032308   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:18.032313   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:18.032361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:18.072497   70908 cri.go:89] found id: ""
	I0311 21:36:18.072519   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.072526   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:18.072532   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:18.072578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:18.110091   70908 cri.go:89] found id: ""
	I0311 21:36:18.110119   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.110129   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:18.110136   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:18.110199   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:18.152217   70908 cri.go:89] found id: ""
	I0311 21:36:18.152261   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.152272   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:18.152280   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:18.152347   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:18.193957   70908 cri.go:89] found id: ""
	I0311 21:36:18.193989   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.194000   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:18.194008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:18.194086   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:18.231828   70908 cri.go:89] found id: ""
	I0311 21:36:18.231861   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.231873   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:18.231880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:18.231939   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:18.271862   70908 cri.go:89] found id: ""
	I0311 21:36:18.271896   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.271907   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:18.271917   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:18.271933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:18.325405   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:18.325440   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:18.344560   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:18.344593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:18.425051   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:18.425075   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:18.425093   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:18.513247   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:18.513287   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:19.646758   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.647702   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.649318   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.450692   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.950088   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.028812   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.029828   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.060499   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:21.076648   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:21.076716   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:21.117270   70908 cri.go:89] found id: ""
	I0311 21:36:21.117298   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.117309   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:21.117317   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:21.117388   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:21.159005   70908 cri.go:89] found id: ""
	I0311 21:36:21.159045   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.159056   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:21.159063   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:21.159122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:21.196576   70908 cri.go:89] found id: ""
	I0311 21:36:21.196599   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.196609   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:21.196617   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:21.196677   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:21.237689   70908 cri.go:89] found id: ""
	I0311 21:36:21.237718   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.237729   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:21.237734   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:21.237783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:21.280662   70908 cri.go:89] found id: ""
	I0311 21:36:21.280696   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.280707   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:21.280714   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:21.280795   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:21.321475   70908 cri.go:89] found id: ""
	I0311 21:36:21.321501   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.321511   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:21.321518   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:21.321581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:21.365186   70908 cri.go:89] found id: ""
	I0311 21:36:21.365209   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.365216   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:21.365221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:21.365276   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:21.408678   70908 cri.go:89] found id: ""
	I0311 21:36:21.408713   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.408725   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:21.408754   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:21.408771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:21.466635   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:21.466663   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:21.482596   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:21.482622   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:21.556750   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:21.556769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:21.556780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:21.643095   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:21.643126   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.195112   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:24.208829   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:24.208895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:24.245956   70908 cri.go:89] found id: ""
	I0311 21:36:24.245981   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.245989   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:24.245995   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:24.246053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:24.289740   70908 cri.go:89] found id: ""
	I0311 21:36:24.289766   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.289778   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:24.289784   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:24.289846   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:24.336911   70908 cri.go:89] found id: ""
	I0311 21:36:24.336963   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.336977   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:24.336986   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:24.337057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:24.381715   70908 cri.go:89] found id: ""
	I0311 21:36:24.381739   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.381753   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:24.381761   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:24.381817   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:24.423759   70908 cri.go:89] found id: ""
	I0311 21:36:24.423787   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.423797   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:24.423805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:24.423882   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:24.468903   70908 cri.go:89] found id: ""
	I0311 21:36:24.468931   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.468946   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:24.468954   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:24.469013   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:24.509602   70908 cri.go:89] found id: ""
	I0311 21:36:24.509629   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.509639   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:24.509646   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:24.509706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:24.551483   70908 cri.go:89] found id: ""
	I0311 21:36:24.551511   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.551522   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:24.551532   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:24.551545   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:24.567123   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:24.567154   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:24.644215   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:24.644247   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:24.644262   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:24.726438   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:24.726469   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.779567   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:24.779596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:26.146823   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:28.148291   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:26.450637   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:28.949850   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:25.528542   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:27.529375   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:29.529701   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:27.337785   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:27.352504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:27.352578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:27.395787   70908 cri.go:89] found id: ""
	I0311 21:36:27.395809   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.395817   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:27.395823   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:27.395869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:27.441800   70908 cri.go:89] found id: ""
	I0311 21:36:27.441826   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.441834   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:27.441839   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:27.441893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:27.481761   70908 cri.go:89] found id: ""
	I0311 21:36:27.481791   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.481802   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:27.481809   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:27.481868   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:27.526981   70908 cri.go:89] found id: ""
	I0311 21:36:27.527011   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.527029   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:27.527037   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:27.527130   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:27.566569   70908 cri.go:89] found id: ""
	I0311 21:36:27.566602   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.566614   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:27.566622   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:27.566682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:27.607434   70908 cri.go:89] found id: ""
	I0311 21:36:27.607456   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.607464   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:27.607469   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:27.607529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:27.652648   70908 cri.go:89] found id: ""
	I0311 21:36:27.652674   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.652681   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:27.652686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:27.652756   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:27.691105   70908 cri.go:89] found id: ""
	I0311 21:36:27.691136   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.691148   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:27.691158   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:27.691173   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:27.706451   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:27.706477   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:27.788935   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:27.788959   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:27.788975   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:27.875721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:27.875758   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:27.927920   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:27.927951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.487728   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:30.503425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:30.503508   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:30.550846   70908 cri.go:89] found id: ""
	I0311 21:36:30.550868   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.550875   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:30.550881   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:30.550928   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:30.586886   70908 cri.go:89] found id: ""
	I0311 21:36:30.586915   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.586925   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:30.586934   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:30.586991   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:30.627849   70908 cri.go:89] found id: ""
	I0311 21:36:30.627884   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.627895   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:30.627902   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:30.627965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:30.669188   70908 cri.go:89] found id: ""
	I0311 21:36:30.669209   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.669216   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:30.669222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:30.669266   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:30.711676   70908 cri.go:89] found id: ""
	I0311 21:36:30.711697   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.711705   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:30.711710   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:30.711758   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:30.754218   70908 cri.go:89] found id: ""
	I0311 21:36:30.754240   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.754248   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:30.754253   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:30.754299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:30.791224   70908 cri.go:89] found id: ""
	I0311 21:36:30.791255   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.791263   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:30.791269   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:30.791328   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:30.831263   70908 cri.go:89] found id: ""
	I0311 21:36:30.831291   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.831301   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:30.831311   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:30.831326   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:30.876574   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:30.876600   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.928483   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:30.928509   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:30.944642   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:30.944665   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:31.026406   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:31.026428   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:31.026444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:30.648859   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.147907   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:30.952483   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.451714   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:32.028484   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:34.028948   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.611104   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:33.625644   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:33.625706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:33.664787   70908 cri.go:89] found id: ""
	I0311 21:36:33.664816   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.664825   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:33.664830   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:33.664894   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:33.704636   70908 cri.go:89] found id: ""
	I0311 21:36:33.704659   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.704666   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:33.704672   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:33.704717   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:33.744797   70908 cri.go:89] found id: ""
	I0311 21:36:33.744837   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.744848   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:33.744855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:33.744917   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:33.787435   70908 cri.go:89] found id: ""
	I0311 21:36:33.787464   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.787474   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:33.787482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:33.787541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:33.826578   70908 cri.go:89] found id: ""
	I0311 21:36:33.826606   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.826617   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:33.826624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:33.826684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:33.864854   70908 cri.go:89] found id: ""
	I0311 21:36:33.864875   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.864882   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:33.864887   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:33.864934   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:33.905366   70908 cri.go:89] found id: ""
	I0311 21:36:33.905397   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.905409   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:33.905416   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:33.905477   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:33.950196   70908 cri.go:89] found id: ""
	I0311 21:36:33.950222   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.950232   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:33.950243   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:33.950258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:34.001016   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:34.001049   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:34.059102   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:34.059131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:34.075879   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:34.075908   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:34.177114   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:34.177138   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:34.177161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:35.647611   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.147941   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:35.950147   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.449090   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:36.030072   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.527952   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:36.756459   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:36.772781   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:36.772867   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:36.820076   70908 cri.go:89] found id: ""
	I0311 21:36:36.820103   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.820111   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:36.820118   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:36.820169   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:36.859279   70908 cri.go:89] found id: ""
	I0311 21:36:36.859306   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.859317   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:36.859324   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:36.859383   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:36.899669   70908 cri.go:89] found id: ""
	I0311 21:36:36.899694   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.899705   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:36.899712   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:36.899770   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:36.938826   70908 cri.go:89] found id: ""
	I0311 21:36:36.938853   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.938864   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:36.938872   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:36.938957   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:36.976659   70908 cri.go:89] found id: ""
	I0311 21:36:36.976685   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.976693   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:36.976703   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:36.976772   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:37.015439   70908 cri.go:89] found id: ""
	I0311 21:36:37.015462   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.015469   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:37.015474   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:37.015519   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:37.057469   70908 cri.go:89] found id: ""
	I0311 21:36:37.057496   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.057507   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:37.057514   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:37.057579   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:37.106287   70908 cri.go:89] found id: ""
	I0311 21:36:37.106316   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.106325   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:37.106335   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:37.106352   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:37.122333   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:37.122367   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:37.197708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:37.197731   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:37.197742   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:37.281911   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:37.281944   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:37.335978   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:37.336011   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:39.891583   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:39.914741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:39.914823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:39.955751   70908 cri.go:89] found id: ""
	I0311 21:36:39.955773   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.955781   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:39.955786   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:39.955837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:39.997604   70908 cri.go:89] found id: ""
	I0311 21:36:39.997632   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.997642   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:39.997649   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:39.997711   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:40.039138   70908 cri.go:89] found id: ""
	I0311 21:36:40.039168   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.039178   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:40.039186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:40.039230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:40.079906   70908 cri.go:89] found id: ""
	I0311 21:36:40.079934   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.079945   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:40.079952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:40.080017   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:40.124116   70908 cri.go:89] found id: ""
	I0311 21:36:40.124141   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.124152   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:40.124159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:40.124221   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:40.165078   70908 cri.go:89] found id: ""
	I0311 21:36:40.165099   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.165108   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:40.165113   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:40.165158   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:40.203928   70908 cri.go:89] found id: ""
	I0311 21:36:40.203954   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.203962   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:40.203971   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:40.204018   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:40.244755   70908 cri.go:89] found id: ""
	I0311 21:36:40.244783   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.244793   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:40.244803   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:40.244819   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:40.302090   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:40.302125   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:40.318071   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:40.318097   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:40.405336   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:40.405363   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:40.405378   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:40.493262   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:40.493298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:40.148095   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.651483   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:40.449200   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.450259   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:40.528526   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.533619   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:45.029285   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:43.052419   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:43.068300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:43.068378   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:43.109665   70908 cri.go:89] found id: ""
	I0311 21:36:43.109701   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.109717   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:43.109725   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:43.109789   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:43.152233   70908 cri.go:89] found id: ""
	I0311 21:36:43.152253   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.152260   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:43.152265   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:43.152311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:43.194969   70908 cri.go:89] found id: ""
	I0311 21:36:43.194995   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.195002   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:43.195008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:43.195056   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:43.234555   70908 cri.go:89] found id: ""
	I0311 21:36:43.234581   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.234592   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:43.234597   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:43.234651   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:43.275188   70908 cri.go:89] found id: ""
	I0311 21:36:43.275214   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.275224   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:43.275232   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:43.275287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:43.314481   70908 cri.go:89] found id: ""
	I0311 21:36:43.314507   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.314515   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:43.314521   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:43.314580   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:43.353287   70908 cri.go:89] found id: ""
	I0311 21:36:43.353317   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.353328   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:43.353336   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:43.353395   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:43.396112   70908 cri.go:89] found id: ""
	I0311 21:36:43.396138   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.396150   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:43.396160   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:43.396175   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:43.456116   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:43.456143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:43.472992   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:43.473023   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:43.558281   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:43.558311   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:43.558327   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:43.641849   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:43.641885   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:45.147404   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.147574   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:44.954864   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.450806   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.029669   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:49.529505   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:46.187444   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:46.202848   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:46.202911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:46.244843   70908 cri.go:89] found id: ""
	I0311 21:36:46.244872   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.244880   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:46.244886   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:46.244933   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:46.297789   70908 cri.go:89] found id: ""
	I0311 21:36:46.297820   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.297831   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:46.297838   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:46.297903   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:46.353104   70908 cri.go:89] found id: ""
	I0311 21:36:46.353127   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.353134   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:46.353140   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:46.353211   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:46.426767   70908 cri.go:89] found id: ""
	I0311 21:36:46.426792   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.426799   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:46.426804   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:46.426858   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:46.469850   70908 cri.go:89] found id: ""
	I0311 21:36:46.469881   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.469891   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:46.469899   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:46.469960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:46.510692   70908 cri.go:89] found id: ""
	I0311 21:36:46.510718   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.510726   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:46.510732   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:46.510787   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:46.554445   70908 cri.go:89] found id: ""
	I0311 21:36:46.554468   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.554475   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:46.554482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:46.554527   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:46.592417   70908 cri.go:89] found id: ""
	I0311 21:36:46.592448   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.592458   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:46.592467   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:46.592480   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:46.607106   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:46.607146   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:46.691556   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:46.691575   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:46.691587   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:46.772468   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:46.772503   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:46.814478   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:46.814512   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.368451   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:49.383504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:49.383573   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:49.427392   70908 cri.go:89] found id: ""
	I0311 21:36:49.427415   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.427426   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:49.427434   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:49.427493   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:49.469022   70908 cri.go:89] found id: ""
	I0311 21:36:49.469044   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.469052   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:49.469059   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:49.469106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:49.510755   70908 cri.go:89] found id: ""
	I0311 21:36:49.510781   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.510792   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:49.510800   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:49.510886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:49.556594   70908 cri.go:89] found id: ""
	I0311 21:36:49.556631   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.556642   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:49.556649   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:49.556710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:49.597035   70908 cri.go:89] found id: ""
	I0311 21:36:49.597059   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.597067   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:49.597072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:49.597138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:49.642947   70908 cri.go:89] found id: ""
	I0311 21:36:49.642975   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.642985   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:49.642993   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:49.643051   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:49.681401   70908 cri.go:89] found id: ""
	I0311 21:36:49.681423   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.681430   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:49.681435   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:49.681478   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:49.718498   70908 cri.go:89] found id: ""
	I0311 21:36:49.718529   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.718539   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:49.718549   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:49.718563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:49.764483   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:49.764515   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.821261   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:49.821293   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:49.837110   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:49.837135   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:49.918507   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:49.918529   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:49.918541   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:49.648198   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.146837   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:49.450941   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:51.950760   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.030288   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:54.528831   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.500354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:52.516722   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:52.516811   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:52.563312   70908 cri.go:89] found id: ""
	I0311 21:36:52.563340   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.563354   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:52.563362   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:52.563421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:52.603545   70908 cri.go:89] found id: ""
	I0311 21:36:52.603572   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.603581   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:52.603588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:52.603657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:52.645624   70908 cri.go:89] found id: ""
	I0311 21:36:52.645648   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.645658   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:52.645665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:52.645722   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:52.693335   70908 cri.go:89] found id: ""
	I0311 21:36:52.693363   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.693373   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:52.693380   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:52.693437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:52.740272   70908 cri.go:89] found id: ""
	I0311 21:36:52.740310   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.740331   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:52.740341   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:52.740398   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:52.786241   70908 cri.go:89] found id: ""
	I0311 21:36:52.786276   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.786285   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:52.786291   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:52.786355   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:52.825013   70908 cri.go:89] found id: ""
	I0311 21:36:52.825042   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.825053   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:52.825061   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:52.825117   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:52.862867   70908 cri.go:89] found id: ""
	I0311 21:36:52.862892   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.862901   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:52.862908   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:52.862922   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:52.917005   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:52.917036   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:52.932086   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:52.932112   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:53.012379   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:53.012402   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:53.012413   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:53.096881   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:53.096913   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:55.640142   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:55.656664   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:55.656749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:55.697962   70908 cri.go:89] found id: ""
	I0311 21:36:55.697992   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.698000   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:55.698005   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:55.698059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:55.741888   70908 cri.go:89] found id: ""
	I0311 21:36:55.741910   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.741917   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:55.741921   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:55.741965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:55.779352   70908 cri.go:89] found id: ""
	I0311 21:36:55.779372   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.779381   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:55.779386   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:55.779430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:55.819496   70908 cri.go:89] found id: ""
	I0311 21:36:55.819530   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.819541   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:55.819549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:55.819612   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:55.859384   70908 cri.go:89] found id: ""
	I0311 21:36:55.859412   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.859419   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:55.859424   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:55.859473   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:55.899415   70908 cri.go:89] found id: ""
	I0311 21:36:55.899438   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.899445   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:55.899450   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:55.899496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:55.938595   70908 cri.go:89] found id: ""
	I0311 21:36:55.938625   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.938637   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:55.938645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:55.938710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:55.980064   70908 cri.go:89] found id: ""
	I0311 21:36:55.980089   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.980096   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:55.980103   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:55.980115   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:55.996222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:55.996297   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:36:54.147743   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.150270   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:58.648829   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:54.450767   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.949091   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:58.950443   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.529184   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:59.029323   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	W0311 21:36:56.081046   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:56.081074   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:56.081090   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:56.167748   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:56.167773   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:56.221118   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:56.221150   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:58.772403   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:58.789349   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:58.789421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:58.829945   70908 cri.go:89] found id: ""
	I0311 21:36:58.829974   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.829985   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:58.829993   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:58.830059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:58.877190   70908 cri.go:89] found id: ""
	I0311 21:36:58.877214   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.877224   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:58.877231   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:58.877295   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:58.920086   70908 cri.go:89] found id: ""
	I0311 21:36:58.920113   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.920122   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:58.920128   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:58.920189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:58.956864   70908 cri.go:89] found id: ""
	I0311 21:36:58.956890   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.956900   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:58.956907   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:58.956967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:58.999363   70908 cri.go:89] found id: ""
	I0311 21:36:58.999390   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.999400   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:58.999408   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:58.999469   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:59.041759   70908 cri.go:89] found id: ""
	I0311 21:36:59.041787   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.041797   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:59.041803   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:59.041850   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:59.084378   70908 cri.go:89] found id: ""
	I0311 21:36:59.084406   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.084417   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:59.084425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:59.084479   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:59.124105   70908 cri.go:89] found id: ""
	I0311 21:36:59.124151   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.124163   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:59.124173   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:59.124188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:59.202060   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:59.202083   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:59.202098   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:59.284025   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:59.284060   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:59.327926   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:59.327951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:59.382505   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:59.382533   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:01.147260   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.149020   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.450230   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.949834   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.529173   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.532427   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.900084   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:01.914495   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:01.914552   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:01.956887   70908 cri.go:89] found id: ""
	I0311 21:37:01.956912   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.956922   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:01.956929   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:01.956986   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:01.995358   70908 cri.go:89] found id: ""
	I0311 21:37:01.995385   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.995394   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:01.995399   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:01.995448   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:02.033949   70908 cri.go:89] found id: ""
	I0311 21:37:02.033974   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.033984   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:02.033991   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:02.034049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:02.074348   70908 cri.go:89] found id: ""
	I0311 21:37:02.074372   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.074382   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:02.074390   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:02.074449   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:02.112456   70908 cri.go:89] found id: ""
	I0311 21:37:02.112479   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.112486   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:02.112491   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:02.112554   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:02.155102   70908 cri.go:89] found id: ""
	I0311 21:37:02.155130   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.155138   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:02.155149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:02.155205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:02.191359   70908 cri.go:89] found id: ""
	I0311 21:37:02.191386   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.191393   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:02.191399   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:02.191450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:02.236178   70908 cri.go:89] found id: ""
	I0311 21:37:02.236203   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.236211   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:02.236220   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:02.236231   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:02.285794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:02.285818   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:02.342348   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:02.342387   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:02.357230   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:02.357257   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:02.431044   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:02.431064   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:02.431076   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.019473   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:05.035841   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:05.035901   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:05.082013   70908 cri.go:89] found id: ""
	I0311 21:37:05.082034   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.082041   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:05.082046   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:05.082091   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:05.126236   70908 cri.go:89] found id: ""
	I0311 21:37:05.126257   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.126265   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:05.126270   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:05.126311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:05.170573   70908 cri.go:89] found id: ""
	I0311 21:37:05.170601   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.170608   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:05.170614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:05.170658   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:05.213921   70908 cri.go:89] found id: ""
	I0311 21:37:05.213948   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.213958   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:05.213965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:05.214025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:05.261178   70908 cri.go:89] found id: ""
	I0311 21:37:05.261206   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.261213   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:05.261221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:05.261273   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:05.306007   70908 cri.go:89] found id: ""
	I0311 21:37:05.306037   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.306045   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:05.306051   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:05.306106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:05.346653   70908 cri.go:89] found id: ""
	I0311 21:37:05.346679   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.346688   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:05.346694   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:05.346752   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:05.384587   70908 cri.go:89] found id: ""
	I0311 21:37:05.384626   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.384637   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:05.384648   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:05.384664   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:05.440676   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:05.440709   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:05.456989   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:05.457018   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:05.553900   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:05.553932   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:05.553947   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.633270   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:05.633300   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:05.647077   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.146975   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:06.449502   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.450008   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:06.028642   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.529826   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.181935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:08.198179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:08.198251   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:08.236484   70908 cri.go:89] found id: ""
	I0311 21:37:08.236506   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.236516   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:08.236524   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:08.236578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:08.277701   70908 cri.go:89] found id: ""
	I0311 21:37:08.277731   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.277739   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:08.277745   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:08.277804   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:08.319559   70908 cri.go:89] found id: ""
	I0311 21:37:08.319585   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.319596   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:08.319604   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:08.319666   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:08.359752   70908 cri.go:89] found id: ""
	I0311 21:37:08.359777   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.359785   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:08.359791   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:08.359849   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:08.397432   70908 cri.go:89] found id: ""
	I0311 21:37:08.397453   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.397460   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:08.397465   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:08.397511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:08.438708   70908 cri.go:89] found id: ""
	I0311 21:37:08.438732   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.438742   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:08.438749   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:08.438807   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:08.479511   70908 cri.go:89] found id: ""
	I0311 21:37:08.479533   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.479560   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:08.479566   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:08.479620   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:08.521634   70908 cri.go:89] found id: ""
	I0311 21:37:08.521659   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.521670   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:08.521680   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:08.521693   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:08.577033   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:08.577065   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:08.592006   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:08.592030   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:08.680862   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:08.680903   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:08.680919   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:08.764991   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:08.765037   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:10.147819   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:12.648352   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:10.949371   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:12.949571   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:11.028245   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:13.028689   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:15.034232   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:11.313168   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:11.326808   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:11.326876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:11.364223   70908 cri.go:89] found id: ""
	I0311 21:37:11.364246   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.364254   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:11.364259   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:11.364311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:11.401361   70908 cri.go:89] found id: ""
	I0311 21:37:11.401391   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.401402   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:11.401409   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:11.401459   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:11.441927   70908 cri.go:89] found id: ""
	I0311 21:37:11.441950   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.441957   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:11.441962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:11.442015   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:11.480804   70908 cri.go:89] found id: ""
	I0311 21:37:11.480836   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.480847   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:11.480855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:11.480913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:11.520135   70908 cri.go:89] found id: ""
	I0311 21:37:11.520166   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.520177   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:11.520193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:11.520255   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:11.559214   70908 cri.go:89] found id: ""
	I0311 21:37:11.559244   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.559255   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:11.559263   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:11.559322   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:11.597346   70908 cri.go:89] found id: ""
	I0311 21:37:11.597374   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.597383   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:11.597391   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:11.597452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:11.646095   70908 cri.go:89] found id: ""
	I0311 21:37:11.646118   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.646127   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:11.646137   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:11.646167   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:11.691813   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:11.691844   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:11.745270   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:11.745303   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:11.761107   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:11.761131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:11.841033   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:11.841059   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:11.841074   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:14.431709   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:14.447064   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:14.447131   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:14.493094   70908 cri.go:89] found id: ""
	I0311 21:37:14.493132   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.493140   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:14.493146   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:14.493195   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:14.537391   70908 cri.go:89] found id: ""
	I0311 21:37:14.537415   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.537423   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:14.537428   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:14.537487   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:14.576284   70908 cri.go:89] found id: ""
	I0311 21:37:14.576306   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.576313   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:14.576319   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:14.576375   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:14.627057   70908 cri.go:89] found id: ""
	I0311 21:37:14.627086   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.627097   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:14.627105   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:14.627163   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:14.669204   70908 cri.go:89] found id: ""
	I0311 21:37:14.669226   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.669233   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:14.669238   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:14.669293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:14.708787   70908 cri.go:89] found id: ""
	I0311 21:37:14.708812   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.708820   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:14.708826   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:14.708892   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:14.749795   70908 cri.go:89] found id: ""
	I0311 21:37:14.749819   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.749828   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:14.749835   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:14.749893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:14.794871   70908 cri.go:89] found id: ""
	I0311 21:37:14.794900   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.794911   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:14.794922   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:14.794936   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:14.850022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:14.850050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:14.866589   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:14.866618   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:14.968887   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:14.968906   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:14.968921   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:15.047376   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:15.047404   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:14.648528   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:16.649275   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:18.649842   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:14.951387   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.451239   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.529411   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:20.030012   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.599834   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:17.613610   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:17.613665   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:17.655340   70908 cri.go:89] found id: ""
	I0311 21:37:17.655361   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.655369   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:17.655374   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:17.655416   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:17.695071   70908 cri.go:89] found id: ""
	I0311 21:37:17.695103   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.695114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:17.695121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:17.695178   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:17.731914   70908 cri.go:89] found id: ""
	I0311 21:37:17.731938   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.731946   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:17.731952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:17.732012   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:17.768198   70908 cri.go:89] found id: ""
	I0311 21:37:17.768224   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.768236   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:17.768242   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:17.768301   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:17.802881   70908 cri.go:89] found id: ""
	I0311 21:37:17.802909   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.802920   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:17.802928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:17.802983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:17.841660   70908 cri.go:89] found id: ""
	I0311 21:37:17.841684   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.841692   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:17.841698   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:17.841749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:17.880154   70908 cri.go:89] found id: ""
	I0311 21:37:17.880183   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.880196   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:17.880205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:17.880260   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:17.919797   70908 cri.go:89] found id: ""
	I0311 21:37:17.919822   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.919829   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:17.919837   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:17.919847   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:17.976607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:17.976636   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:17.993313   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:17.993339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:18.069928   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:18.069956   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:18.069973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:18.152257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:18.152285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:20.706553   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:20.721148   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:20.721214   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:20.762913   70908 cri.go:89] found id: ""
	I0311 21:37:20.762935   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.762943   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:20.762952   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:20.762997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:20.811120   70908 cri.go:89] found id: ""
	I0311 21:37:20.811147   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.811158   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:20.811165   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:20.811225   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:20.848987   70908 cri.go:89] found id: ""
	I0311 21:37:20.849015   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.849026   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:20.849033   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:20.849098   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:20.896201   70908 cri.go:89] found id: ""
	I0311 21:37:20.896226   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.896233   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:20.896240   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:20.896299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:20.936570   70908 cri.go:89] found id: ""
	I0311 21:37:20.936595   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.936603   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:20.936608   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:20.936657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:20.977535   70908 cri.go:89] found id: ""
	I0311 21:37:20.977565   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.977576   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:20.977584   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:20.977647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:21.015370   70908 cri.go:89] found id: ""
	I0311 21:37:21.015395   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.015405   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:21.015413   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:21.015472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:21.146868   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:23.147272   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:19.950972   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:22.450298   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:22.528109   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:24.530216   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:21.056190   70908 cri.go:89] found id: ""
	I0311 21:37:21.056214   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.056225   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:21.056235   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:21.056255   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:21.112022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:21.112051   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:21.128841   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:21.128872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:21.209690   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:21.209716   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:21.209732   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:21.291064   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:21.291099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:23.844334   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:23.860000   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:23.860061   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:23.899777   70908 cri.go:89] found id: ""
	I0311 21:37:23.899805   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.899814   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:23.899820   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:23.899879   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:23.941510   70908 cri.go:89] found id: ""
	I0311 21:37:23.941537   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.941547   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:23.941555   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:23.941627   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:23.980564   70908 cri.go:89] found id: ""
	I0311 21:37:23.980592   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.980602   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:23.980614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:23.980676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:24.020310   70908 cri.go:89] found id: ""
	I0311 21:37:24.020337   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.020348   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:24.020354   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:24.020410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:24.059320   70908 cri.go:89] found id: ""
	I0311 21:37:24.059349   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.059359   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:24.059367   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:24.059424   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:24.096625   70908 cri.go:89] found id: ""
	I0311 21:37:24.096652   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.096660   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:24.096666   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:24.096723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:24.137068   70908 cri.go:89] found id: ""
	I0311 21:37:24.137100   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.137112   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:24.137121   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:24.137182   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:24.181298   70908 cri.go:89] found id: ""
	I0311 21:37:24.181325   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.181336   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:24.181348   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:24.181364   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:24.265423   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:24.265454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:24.318088   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:24.318113   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:24.374402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:24.374430   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:24.388934   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:24.388962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:24.475842   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:25.647164   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:27.650157   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:24.948984   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:26.949444   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:28.950697   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:27.030240   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:29.030848   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:26.976017   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:26.991533   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:26.991602   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:27.034750   70908 cri.go:89] found id: ""
	I0311 21:37:27.034769   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.034776   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:27.034781   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:27.034837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:27.073275   70908 cri.go:89] found id: ""
	I0311 21:37:27.073301   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.073309   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:27.073317   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:27.073363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:27.113396   70908 cri.go:89] found id: ""
	I0311 21:37:27.113418   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.113425   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:27.113431   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:27.113482   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:27.157442   70908 cri.go:89] found id: ""
	I0311 21:37:27.157465   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.157475   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:27.157482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:27.157534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:27.197277   70908 cri.go:89] found id: ""
	I0311 21:37:27.197302   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.197309   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:27.197315   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:27.197363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:27.237967   70908 cri.go:89] found id: ""
	I0311 21:37:27.237991   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.237999   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:27.238005   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:27.238077   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:27.280434   70908 cri.go:89] found id: ""
	I0311 21:37:27.280459   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.280467   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:27.280472   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:27.280535   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:27.334940   70908 cri.go:89] found id: ""
	I0311 21:37:27.334970   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.334982   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:27.334992   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:27.335010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:27.402535   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:27.402570   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:27.416758   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:27.416787   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:27.492762   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:27.492786   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:27.492803   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:27.576989   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:27.577032   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:30.124039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:30.138419   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:30.138483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:30.180900   70908 cri.go:89] found id: ""
	I0311 21:37:30.180926   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.180936   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:30.180944   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:30.180998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:30.222886   70908 cri.go:89] found id: ""
	I0311 21:37:30.222913   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.222921   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:30.222926   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:30.222976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:30.264332   70908 cri.go:89] found id: ""
	I0311 21:37:30.264357   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.264367   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:30.264376   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:30.264436   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:30.307084   70908 cri.go:89] found id: ""
	I0311 21:37:30.307112   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.307123   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:30.307130   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:30.307188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:30.345954   70908 cri.go:89] found id: ""
	I0311 21:37:30.345979   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.345990   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:30.345997   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:30.346057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:30.389408   70908 cri.go:89] found id: ""
	I0311 21:37:30.389439   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.389450   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:30.389457   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:30.389517   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:30.438380   70908 cri.go:89] found id: ""
	I0311 21:37:30.438410   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.438420   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:30.438427   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:30.438489   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:30.479860   70908 cri.go:89] found id: ""
	I0311 21:37:30.479884   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.479895   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:30.479906   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:30.479920   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:30.535831   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:30.535857   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:30.552702   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:30.552725   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:30.633417   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:30.633439   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:30.633454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:30.723106   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:30.723143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:30.147993   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:32.152839   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:31.450942   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.949947   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:31.528469   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.529721   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.270654   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:33.296640   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:33.296710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:33.366053   70908 cri.go:89] found id: ""
	I0311 21:37:33.366082   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.366093   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:33.366101   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:33.366161   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:33.421455   70908 cri.go:89] found id: ""
	I0311 21:37:33.421488   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.421501   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:33.421509   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:33.421583   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:33.464555   70908 cri.go:89] found id: ""
	I0311 21:37:33.464579   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.464586   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:33.464592   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:33.464647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:33.507044   70908 cri.go:89] found id: ""
	I0311 21:37:33.507086   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.507100   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:33.507110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:33.507175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:33.561446   70908 cri.go:89] found id: ""
	I0311 21:37:33.561518   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.561532   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:33.561540   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:33.561601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:33.604496   70908 cri.go:89] found id: ""
	I0311 21:37:33.604519   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.604528   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:33.604534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:33.604591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:33.645754   70908 cri.go:89] found id: ""
	I0311 21:37:33.645781   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.645791   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:33.645797   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:33.645869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:33.690041   70908 cri.go:89] found id: ""
	I0311 21:37:33.690071   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.690082   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:33.690092   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:33.690108   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:33.765708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:33.765737   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:33.765752   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:33.848869   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:33.848906   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:33.900191   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:33.900223   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:33.957101   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:33.957138   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:34.646831   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.647640   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.449429   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:38.948831   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.028141   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:38.028588   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:40.028676   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.474442   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:36.490159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:36.490231   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:36.537784   70908 cri.go:89] found id: ""
	I0311 21:37:36.537812   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.537822   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:36.537829   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:36.537885   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:36.581192   70908 cri.go:89] found id: ""
	I0311 21:37:36.581219   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.581230   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:36.581237   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:36.581297   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:36.620448   70908 cri.go:89] found id: ""
	I0311 21:37:36.620480   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.620492   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:36.620501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:36.620566   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:36.662135   70908 cri.go:89] found id: ""
	I0311 21:37:36.662182   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.662193   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:36.662203   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:36.662268   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:36.708138   70908 cri.go:89] found id: ""
	I0311 21:37:36.708178   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.708188   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:36.708198   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:36.708267   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:36.749668   70908 cri.go:89] found id: ""
	I0311 21:37:36.749697   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.749708   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:36.749717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:36.749783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:36.788455   70908 cri.go:89] found id: ""
	I0311 21:37:36.788476   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.788483   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:36.788488   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:36.788534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:36.830216   70908 cri.go:89] found id: ""
	I0311 21:37:36.830244   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.830257   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:36.830267   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:36.830285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:36.915306   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:36.915336   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:36.958861   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:36.958892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:37.014463   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:37.014489   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:37.029979   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:37.030010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:37.106840   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
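The probe-and-gather cycle above repeats for the rest of this run: each expected control-plane component is checked with "crictl ps -a --quiet --name=...", no containers are found, and the "describe nodes" step is refused on localhost:8443 because no kube-apiserver container exists. A minimal sketch of the same checks run by hand from a shell on the node, using only commands that already appear in the log (the comments are editorial, not captured output):

    sudo crictl ps -a --quiet --name=kube-apiserver     # empty output: the apiserver container was never created
    sudo journalctl -u kubelet -n 400                    # kubelet logs usually show why the static pods are not coming up
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    # the last command keeps failing with "connection to the server localhost:8443 was refused" until an apiserver is running
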
	I0311 21:37:39.607929   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:39.626247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:39.626307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:39.667409   70908 cri.go:89] found id: ""
	I0311 21:37:39.667436   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.667446   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:39.667454   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:39.667509   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:39.714167   70908 cri.go:89] found id: ""
	I0311 21:37:39.714198   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.714210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:39.714217   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:39.714275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:39.754759   70908 cri.go:89] found id: ""
	I0311 21:37:39.754787   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.754798   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:39.754805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:39.754865   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:39.794999   70908 cri.go:89] found id: ""
	I0311 21:37:39.795028   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.795038   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:39.795045   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:39.795108   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:39.836284   70908 cri.go:89] found id: ""
	I0311 21:37:39.836310   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.836321   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:39.836328   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:39.836386   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:39.876487   70908 cri.go:89] found id: ""
	I0311 21:37:39.876518   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.876530   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:39.876539   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:39.876601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:39.918750   70908 cri.go:89] found id: ""
	I0311 21:37:39.918785   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.918796   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:39.918813   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:39.918871   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:39.958486   70908 cri.go:89] found id: ""
	I0311 21:37:39.958517   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.958529   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:39.958537   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:39.958550   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:39.973899   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:39.973925   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:40.055954   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:40.055980   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:40.055995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:40.144801   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:40.144826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:40.189692   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:40.189722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:39.148581   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:41.647869   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:43.648550   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:40.949502   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.951277   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.528844   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:44.529317   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.748909   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:42.763794   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:42.763877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:42.801470   70908 cri.go:89] found id: ""
	I0311 21:37:42.801493   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.801500   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:42.801506   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:42.801561   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:42.846267   70908 cri.go:89] found id: ""
	I0311 21:37:42.846294   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.846301   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:42.846307   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:42.846357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:42.890257   70908 cri.go:89] found id: ""
	I0311 21:37:42.890283   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.890294   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:42.890301   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:42.890357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:42.933605   70908 cri.go:89] found id: ""
	I0311 21:37:42.933628   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.933636   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:42.933643   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:42.933699   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:42.979020   70908 cri.go:89] found id: ""
	I0311 21:37:42.979043   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.979052   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:42.979059   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:42.979122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:43.021695   70908 cri.go:89] found id: ""
	I0311 21:37:43.021724   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.021734   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:43.021741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:43.021801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:43.064356   70908 cri.go:89] found id: ""
	I0311 21:37:43.064398   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.064406   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:43.064412   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:43.064457   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:43.101878   70908 cri.go:89] found id: ""
	I0311 21:37:43.101901   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.101909   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:43.101917   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:43.101930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:43.185836   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:43.185861   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:43.185874   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:43.268879   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:43.268912   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:43.319582   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:43.319614   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:43.374996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:43.375022   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:45.890408   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:45.905973   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:45.906041   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:45.951994   70908 cri.go:89] found id: ""
	I0311 21:37:45.952025   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.952040   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:45.952049   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:45.952112   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:45.992913   70908 cri.go:89] found id: ""
	I0311 21:37:45.992953   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.992964   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:45.992971   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:45.993034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:46.036306   70908 cri.go:89] found id: ""
	I0311 21:37:46.036334   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.036345   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:46.036353   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:46.036410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:46.147754   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:48.647534   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:45.450180   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:47.949568   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:46.532244   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:49.028905   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:46.077532   70908 cri.go:89] found id: ""
	I0311 21:37:46.077564   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.077576   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:46.077583   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:46.077633   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:46.115953   70908 cri.go:89] found id: ""
	I0311 21:37:46.115976   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.115983   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:46.115990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:46.116072   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:46.155665   70908 cri.go:89] found id: ""
	I0311 21:37:46.155699   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.155709   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:46.155717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:46.155775   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:46.197650   70908 cri.go:89] found id: ""
	I0311 21:37:46.197677   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.197696   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:46.197705   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:46.197766   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:46.243006   70908 cri.go:89] found id: ""
	I0311 21:37:46.243030   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.243037   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:46.243045   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:46.243058   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:46.294668   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:46.294696   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:46.308700   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:46.308721   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:46.387188   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:46.387207   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:46.387219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:46.480390   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:46.480423   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.027202   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:49.042292   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:49.042361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:49.081547   70908 cri.go:89] found id: ""
	I0311 21:37:49.081568   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.081579   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:49.081585   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:49.081632   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:49.127438   70908 cri.go:89] found id: ""
	I0311 21:37:49.127467   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.127477   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:49.127485   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:49.127545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:49.173992   70908 cri.go:89] found id: ""
	I0311 21:37:49.174024   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.174033   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:49.174042   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:49.174114   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:49.217087   70908 cri.go:89] found id: ""
	I0311 21:37:49.217120   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.217130   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:49.217138   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:49.217198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:49.255929   70908 cri.go:89] found id: ""
	I0311 21:37:49.255955   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.255970   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:49.255978   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:49.256037   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:49.296373   70908 cri.go:89] found id: ""
	I0311 21:37:49.296399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.296409   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:49.296417   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:49.296474   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:49.335063   70908 cri.go:89] found id: ""
	I0311 21:37:49.335092   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.335103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:49.335110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:49.335176   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:49.378374   70908 cri.go:89] found id: ""
	I0311 21:37:49.378399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.378406   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:49.378414   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:49.378427   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.422193   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:49.422220   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:49.474861   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:49.474893   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:49.490193   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:49.490219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:49.571857   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:49.571880   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:49.571895   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:51.149814   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:53.648033   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:49.949603   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:51.949943   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:53.951963   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:51.531753   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:54.028723   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:52.168934   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:52.183086   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:52.183154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:52.221632   70908 cri.go:89] found id: ""
	I0311 21:37:52.221664   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.221675   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:52.221682   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:52.221743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:52.261550   70908 cri.go:89] found id: ""
	I0311 21:37:52.261575   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.261582   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:52.261588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:52.261638   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:52.302879   70908 cri.go:89] found id: ""
	I0311 21:37:52.302910   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.302920   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:52.302927   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:52.302987   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:52.346462   70908 cri.go:89] found id: ""
	I0311 21:37:52.346485   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.346494   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:52.346499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:52.346551   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:52.387949   70908 cri.go:89] found id: ""
	I0311 21:37:52.387977   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.387988   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:52.387995   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:52.388052   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:52.428527   70908 cri.go:89] found id: ""
	I0311 21:37:52.428564   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.428574   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:52.428582   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:52.428649   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:52.469516   70908 cri.go:89] found id: ""
	I0311 21:37:52.469548   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.469558   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:52.469565   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:52.469616   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:52.508371   70908 cri.go:89] found id: ""
	I0311 21:37:52.508407   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.508417   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:52.508429   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:52.508444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:52.587309   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:52.587346   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:52.587361   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:52.666419   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:52.666449   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:52.713150   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:52.713184   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:52.768011   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:52.768041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.284835   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:55.298742   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:55.298799   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:55.340215   70908 cri.go:89] found id: ""
	I0311 21:37:55.340240   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.340251   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:55.340257   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:55.340321   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:55.377930   70908 cri.go:89] found id: ""
	I0311 21:37:55.377956   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.377967   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:55.377974   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:55.378039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:55.418786   70908 cri.go:89] found id: ""
	I0311 21:37:55.418814   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.418822   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:55.418827   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:55.418883   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:55.461566   70908 cri.go:89] found id: ""
	I0311 21:37:55.461586   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.461593   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:55.461601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:55.461655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:55.502917   70908 cri.go:89] found id: ""
	I0311 21:37:55.502945   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.502955   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:55.502962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:55.503022   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:55.551417   70908 cri.go:89] found id: ""
	I0311 21:37:55.551441   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.551454   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:55.551462   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:55.551514   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:55.596060   70908 cri.go:89] found id: ""
	I0311 21:37:55.596092   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.596103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:55.596111   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:55.596172   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:55.635495   70908 cri.go:89] found id: ""
	I0311 21:37:55.635523   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.635535   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:55.635547   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:55.635564   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:55.691705   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:55.691735   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.707696   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:55.707718   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:55.780432   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:55.780452   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:55.780465   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:55.866033   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:55.866067   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:55.648873   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.147404   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:56.452135   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.951150   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:56.528533   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.529769   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.437299   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:58.453058   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:58.453125   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:58.493317   70908 cri.go:89] found id: ""
	I0311 21:37:58.493339   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.493347   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:58.493353   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:58.493408   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:58.543533   70908 cri.go:89] found id: ""
	I0311 21:37:58.543556   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.543567   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:58.543578   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:58.543634   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:58.585255   70908 cri.go:89] found id: ""
	I0311 21:37:58.585282   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.585292   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:58.585300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:58.585359   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:58.622393   70908 cri.go:89] found id: ""
	I0311 21:37:58.622421   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.622428   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:58.622434   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:58.622501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:58.661939   70908 cri.go:89] found id: ""
	I0311 21:37:58.661963   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.661971   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:58.661977   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:58.662034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:58.703628   70908 cri.go:89] found id: ""
	I0311 21:37:58.703663   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.703674   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:58.703682   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:58.703743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:58.742553   70908 cri.go:89] found id: ""
	I0311 21:37:58.742583   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.742594   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:58.742601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:58.742662   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:58.785016   70908 cri.go:89] found id: ""
	I0311 21:37:58.785040   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.785047   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:58.785055   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:58.785071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:58.857757   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:58.857773   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:58.857786   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:58.946120   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:58.946148   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:58.996288   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:58.996328   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:59.055371   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:59.055407   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:00.651621   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.149663   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:00.951776   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.451012   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:01.028303   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.028600   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:05.032276   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:01.571092   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:01.591149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:01.591238   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:01.629156   70908 cri.go:89] found id: ""
	I0311 21:38:01.629184   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.629196   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:01.629203   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:01.629261   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:01.673656   70908 cri.go:89] found id: ""
	I0311 21:38:01.673680   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.673687   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:01.673692   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:01.673739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:01.713361   70908 cri.go:89] found id: ""
	I0311 21:38:01.713389   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.713397   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:01.713403   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:01.713450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:01.757256   70908 cri.go:89] found id: ""
	I0311 21:38:01.757286   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.757298   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:01.757305   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:01.757362   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:01.797538   70908 cri.go:89] found id: ""
	I0311 21:38:01.797565   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.797573   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:01.797580   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:01.797635   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:01.838664   70908 cri.go:89] found id: ""
	I0311 21:38:01.838692   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.838701   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:01.838707   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:01.838754   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:01.893638   70908 cri.go:89] found id: ""
	I0311 21:38:01.893668   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.893679   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:01.893686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:01.893747   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:01.935547   70908 cri.go:89] found id: ""
	I0311 21:38:01.935569   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.935577   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:01.935585   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:01.935596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:01.989964   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:01.989988   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:02.004949   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:02.004973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:02.082006   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:02.082024   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:02.082041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:02.171040   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:02.171072   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:04.724699   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:04.741445   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:04.741512   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:04.783924   70908 cri.go:89] found id: ""
	I0311 21:38:04.783951   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.783962   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:04.783969   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:04.784028   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:04.825806   70908 cri.go:89] found id: ""
	I0311 21:38:04.825835   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.825845   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:04.825852   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:04.825913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:04.864070   70908 cri.go:89] found id: ""
	I0311 21:38:04.864106   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.864118   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:04.864126   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:04.864181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:04.901735   70908 cri.go:89] found id: ""
	I0311 21:38:04.901759   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.901769   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:04.901777   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:04.901832   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:04.941473   70908 cri.go:89] found id: ""
	I0311 21:38:04.941496   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.941505   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:04.941513   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:04.941569   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:04.993132   70908 cri.go:89] found id: ""
	I0311 21:38:04.993162   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.993170   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:04.993178   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:04.993237   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:05.037925   70908 cri.go:89] found id: ""
	I0311 21:38:05.037950   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.037960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:05.037967   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:05.038026   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:05.080726   70908 cri.go:89] found id: ""
	I0311 21:38:05.080773   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.080784   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:05.080794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:05.080806   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:05.138205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:05.138233   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:05.155048   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:05.155071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:05.233067   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:05.233086   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:05.233099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:05.317897   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:05.317928   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:05.646661   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.647686   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:05.949900   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.950261   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.528049   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:09.530724   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.863484   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:07.877342   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:07.877411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:07.916352   70908 cri.go:89] found id: ""
	I0311 21:38:07.916374   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.916383   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:07.916391   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:07.916454   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:07.954833   70908 cri.go:89] found id: ""
	I0311 21:38:07.954854   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.954863   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:07.954870   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:07.954926   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:07.993124   70908 cri.go:89] found id: ""
	I0311 21:38:07.993152   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.993161   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:07.993168   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:07.993232   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:08.039081   70908 cri.go:89] found id: ""
	I0311 21:38:08.039108   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.039118   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:08.039125   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:08.039191   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:08.084627   70908 cri.go:89] found id: ""
	I0311 21:38:08.084650   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.084658   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:08.084665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:08.084712   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:08.125986   70908 cri.go:89] found id: ""
	I0311 21:38:08.126015   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.126026   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:08.126034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:08.126080   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:08.167149   70908 cri.go:89] found id: ""
	I0311 21:38:08.167176   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.167188   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:08.167193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:08.167252   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:08.204988   70908 cri.go:89] found id: ""
	I0311 21:38:08.205012   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.205020   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:08.205028   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:08.205043   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:08.295226   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:08.295268   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:08.357789   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:08.357820   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:08.434091   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:08.434132   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:08.455208   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:08.455240   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:08.529620   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:11.030060   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:09.648047   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.649628   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:13.652370   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:10.450139   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:12.949551   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.531354   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:14.029703   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.044303   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:11.046353   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:11.088067   70908 cri.go:89] found id: ""
	I0311 21:38:11.088099   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.088110   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:11.088117   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:11.088177   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:11.131077   70908 cri.go:89] found id: ""
	I0311 21:38:11.131104   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.131114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:11.131121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:11.131181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:11.172409   70908 cri.go:89] found id: ""
	I0311 21:38:11.172431   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.172439   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:11.172444   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:11.172496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:11.216775   70908 cri.go:89] found id: ""
	I0311 21:38:11.216817   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.216825   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:11.216830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:11.216886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:11.255105   70908 cri.go:89] found id: ""
	I0311 21:38:11.255129   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.255137   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:11.255142   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:11.255205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:11.292397   70908 cri.go:89] found id: ""
	I0311 21:38:11.292429   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.292440   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:11.292448   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:11.292518   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:11.330376   70908 cri.go:89] found id: ""
	I0311 21:38:11.330397   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.330408   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:11.330415   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:11.330476   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:11.367699   70908 cri.go:89] found id: ""
	I0311 21:38:11.367727   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.367737   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:11.367748   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:11.367763   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:11.421847   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:11.421876   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:11.437570   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:11.437593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:11.522084   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:11.522108   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:11.522123   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:11.606181   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:11.606228   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.153952   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:14.175726   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:14.175798   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:14.221752   70908 cri.go:89] found id: ""
	I0311 21:38:14.221784   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.221798   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:14.221807   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:14.221895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:14.286690   70908 cri.go:89] found id: ""
	I0311 21:38:14.286720   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.286740   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:14.286757   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:14.286824   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:14.343764   70908 cri.go:89] found id: ""
	I0311 21:38:14.343790   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.343799   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:14.343806   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:14.343876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:14.381198   70908 cri.go:89] found id: ""
	I0311 21:38:14.381220   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.381230   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:14.381237   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:14.381307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:14.421578   70908 cri.go:89] found id: ""
	I0311 21:38:14.421603   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.421613   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:14.421620   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:14.421678   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:14.462945   70908 cri.go:89] found id: ""
	I0311 21:38:14.462972   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.462982   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:14.462990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:14.463049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:14.503503   70908 cri.go:89] found id: ""
	I0311 21:38:14.503532   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.503543   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:14.503550   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:14.503610   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:14.543987   70908 cri.go:89] found id: ""
	I0311 21:38:14.544021   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.544034   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:14.544045   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:14.544062   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:14.624781   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:14.624804   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:14.624821   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:14.707130   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:14.707161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.750815   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:14.750848   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:14.806855   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:14.806882   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:16.149516   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:18.646716   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:14.949827   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:16.953660   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:16.031935   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:18.529085   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:17.325267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:17.340421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:17.340483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:17.382808   70908 cri.go:89] found id: ""
	I0311 21:38:17.382831   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.382841   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:17.382849   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:17.382906   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:17.424838   70908 cri.go:89] found id: ""
	I0311 21:38:17.424865   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.424875   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:17.424883   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:17.424940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:17.466298   70908 cri.go:89] found id: ""
	I0311 21:38:17.466320   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.466327   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:17.466333   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:17.466397   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:17.506648   70908 cri.go:89] found id: ""
	I0311 21:38:17.506678   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.506685   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:17.506691   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:17.506739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:17.544019   70908 cri.go:89] found id: ""
	I0311 21:38:17.544048   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.544057   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:17.544067   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:17.544154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:17.583691   70908 cri.go:89] found id: ""
	I0311 21:38:17.583710   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.583717   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:17.583723   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:17.583768   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:17.624432   70908 cri.go:89] found id: ""
	I0311 21:38:17.624453   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.624460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:17.624466   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:17.624516   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:17.663253   70908 cri.go:89] found id: ""
	I0311 21:38:17.663294   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.663312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:17.663322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:17.663339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:17.749928   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:17.749962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:17.792817   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:17.792853   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:17.847391   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:17.847419   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:17.862813   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:17.862835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:17.935307   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:20.435995   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:20.452441   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:20.452510   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:20.491960   70908 cri.go:89] found id: ""
	I0311 21:38:20.491985   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.491992   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:20.491998   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:20.492045   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:20.531679   70908 cri.go:89] found id: ""
	I0311 21:38:20.531700   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.531707   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:20.531712   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:20.531764   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:20.571666   70908 cri.go:89] found id: ""
	I0311 21:38:20.571687   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.571694   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:20.571699   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:20.571762   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:20.611165   70908 cri.go:89] found id: ""
	I0311 21:38:20.611187   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.611194   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:20.611199   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:20.611248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:20.648680   70908 cri.go:89] found id: ""
	I0311 21:38:20.648709   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.648720   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:20.648728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:20.648801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:20.690177   70908 cri.go:89] found id: ""
	I0311 21:38:20.690204   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.690215   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:20.690222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:20.690298   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:20.728918   70908 cri.go:89] found id: ""
	I0311 21:38:20.728949   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.728960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:20.728968   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:20.729039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:20.773559   70908 cri.go:89] found id: ""
	I0311 21:38:20.773586   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.773596   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:20.773607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:20.773623   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:20.788709   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:20.788750   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:20.869832   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:20.869856   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:20.869868   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:20.963515   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:20.963544   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:21.007029   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:21.007055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:21.147703   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.660410   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:19.449416   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:21.451194   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.950401   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:20.529497   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:22.529947   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:25.030431   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.566134   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:23.583855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:23.583911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:23.623605   70908 cri.go:89] found id: ""
	I0311 21:38:23.623633   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.623656   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:23.623664   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:23.623719   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:23.663058   70908 cri.go:89] found id: ""
	I0311 21:38:23.663081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.663091   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:23.663098   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:23.663157   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:23.701930   70908 cri.go:89] found id: ""
	I0311 21:38:23.701963   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.701975   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:23.701985   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:23.702049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:23.743925   70908 cri.go:89] found id: ""
	I0311 21:38:23.743955   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.743964   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:23.743970   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:23.744046   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:23.784030   70908 cri.go:89] found id: ""
	I0311 21:38:23.784055   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.784066   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:23.784073   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:23.784132   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:23.823054   70908 cri.go:89] found id: ""
	I0311 21:38:23.823081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.823089   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:23.823097   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:23.823156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:23.863629   70908 cri.go:89] found id: ""
	I0311 21:38:23.863654   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.863662   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:23.863668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:23.863724   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:23.904429   70908 cri.go:89] found id: ""
	I0311 21:38:23.904454   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.904462   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:23.904470   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:23.904481   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:23.962356   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:23.962393   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:23.977667   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:23.977689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:24.068791   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:24.068820   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:24.068835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:24.157857   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:24.157892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:26.147447   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:28.148069   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:26.450243   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:28.950495   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:27.530194   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:30.029286   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:26.705872   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:26.720840   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:26.720936   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:26.766449   70908 cri.go:89] found id: ""
	I0311 21:38:26.766480   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.766490   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:26.766496   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:26.766557   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:26.806179   70908 cri.go:89] found id: ""
	I0311 21:38:26.806203   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.806210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:26.806216   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:26.806275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:26.850737   70908 cri.go:89] found id: ""
	I0311 21:38:26.850765   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.850775   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:26.850785   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:26.850845   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:26.897694   70908 cri.go:89] found id: ""
	I0311 21:38:26.897722   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.897733   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:26.897744   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:26.897802   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:26.940940   70908 cri.go:89] found id: ""
	I0311 21:38:26.940962   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.940969   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:26.940975   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:26.941021   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:26.978576   70908 cri.go:89] found id: ""
	I0311 21:38:26.978604   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.978614   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:26.978625   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:26.978682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:27.016331   70908 cri.go:89] found id: ""
	I0311 21:38:27.016363   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.016374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:27.016381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:27.016439   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:27.061541   70908 cri.go:89] found id: ""
	I0311 21:38:27.061569   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.061580   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:27.061590   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:27.061609   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:27.154977   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:27.155017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:27.204458   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:27.204488   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:27.259960   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:27.259997   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:27.277806   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:27.277832   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:27.356111   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:29.856828   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:29.871331   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:29.871413   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:29.912867   70908 cri.go:89] found id: ""
	I0311 21:38:29.912895   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.912904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:29.912910   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:29.912973   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:29.953458   70908 cri.go:89] found id: ""
	I0311 21:38:29.953483   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.953491   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:29.953497   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:29.953553   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:29.997873   70908 cri.go:89] found id: ""
	I0311 21:38:29.997904   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.997912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:29.997921   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:29.997983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:30.038831   70908 cri.go:89] found id: ""
	I0311 21:38:30.038861   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.038872   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:30.038880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:30.038940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:30.082089   70908 cri.go:89] found id: ""
	I0311 21:38:30.082117   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.082127   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:30.082135   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:30.082213   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:30.121167   70908 cri.go:89] found id: ""
	I0311 21:38:30.121198   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.121209   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:30.121216   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:30.121274   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:30.162342   70908 cri.go:89] found id: ""
	I0311 21:38:30.162371   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.162380   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:30.162393   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:30.162452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:30.201727   70908 cri.go:89] found id: ""
	I0311 21:38:30.201753   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.201761   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:30.201769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:30.201780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:30.283314   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:30.283346   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:30.333900   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:30.333930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:30.391761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:30.391798   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:30.407907   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:30.407930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:30.489560   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:30.646773   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.649048   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:31.456251   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:33.951315   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.529160   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:34.530183   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.989976   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:33.004724   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:33.004814   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:33.049701   70908 cri.go:89] found id: ""
	I0311 21:38:33.049733   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.049743   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:33.049753   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:33.049823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:33.097759   70908 cri.go:89] found id: ""
	I0311 21:38:33.097792   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.097804   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:33.097811   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:33.097875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:33.143257   70908 cri.go:89] found id: ""
	I0311 21:38:33.143291   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.143300   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:33.143308   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:33.143376   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:33.187434   70908 cri.go:89] found id: ""
	I0311 21:38:33.187464   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.187477   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:33.187483   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:33.187558   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:33.236201   70908 cri.go:89] found id: ""
	I0311 21:38:33.236230   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.236239   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:33.236245   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:33.236312   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:33.279710   70908 cri.go:89] found id: ""
	I0311 21:38:33.279783   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.279816   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:33.279830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:33.279898   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:33.325022   70908 cri.go:89] found id: ""
	I0311 21:38:33.325053   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.325064   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:33.325072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:33.325138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:33.368588   70908 cri.go:89] found id: ""
	I0311 21:38:33.368614   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.368622   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:33.368629   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:33.368640   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:33.427761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:33.427801   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:33.444440   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:33.444472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:33.527745   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:33.527764   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:33.527775   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:33.608215   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:33.608248   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:35.146541   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:37.146917   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.450175   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:38.949371   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.531125   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:39.028780   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.158253   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:36.172370   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:36.172438   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:36.216905   70908 cri.go:89] found id: ""
	I0311 21:38:36.216935   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.216945   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:36.216951   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:36.216996   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:36.260844   70908 cri.go:89] found id: ""
	I0311 21:38:36.260875   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.260885   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:36.260890   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:36.260941   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:36.306730   70908 cri.go:89] found id: ""
	I0311 21:38:36.306755   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.306767   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:36.306772   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:36.306820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:36.346957   70908 cri.go:89] found id: ""
	I0311 21:38:36.346993   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.347004   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:36.347012   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:36.347082   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:36.392265   70908 cri.go:89] found id: ""
	I0311 21:38:36.392295   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.392306   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:36.392313   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:36.392379   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:36.433383   70908 cri.go:89] found id: ""
	I0311 21:38:36.433407   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.433414   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:36.433421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:36.433467   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:36.471291   70908 cri.go:89] found id: ""
	I0311 21:38:36.471325   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.471336   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:36.471344   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:36.471411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:36.514662   70908 cri.go:89] found id: ""
	I0311 21:38:36.514688   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.514698   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:36.514708   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:36.514722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:36.533222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:36.533251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:36.616359   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:36.616384   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:36.616400   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:36.719105   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:36.719137   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:36.771125   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:36.771156   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.324847   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:39.341149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:39.341218   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:39.380284   70908 cri.go:89] found id: ""
	I0311 21:38:39.380324   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.380335   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:39.380343   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:39.380407   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:39.429860   70908 cri.go:89] found id: ""
	I0311 21:38:39.429886   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.429894   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:39.429899   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:39.429960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:39.468089   70908 cri.go:89] found id: ""
	I0311 21:38:39.468113   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.468121   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:39.468127   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:39.468188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:39.508589   70908 cri.go:89] found id: ""
	I0311 21:38:39.508617   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.508628   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:39.508636   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:39.508695   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:39.552427   70908 cri.go:89] found id: ""
	I0311 21:38:39.552451   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.552459   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:39.552464   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:39.552511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:39.592586   70908 cri.go:89] found id: ""
	I0311 21:38:39.592607   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.592615   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:39.592621   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:39.592670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:39.637138   70908 cri.go:89] found id: ""
	I0311 21:38:39.637167   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.637178   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:39.637186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:39.637248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:39.679422   70908 cri.go:89] found id: ""
	I0311 21:38:39.679457   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.679470   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:39.679482   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:39.679499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.734815   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:39.734850   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:39.750448   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:39.750472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:39.832912   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:39.832936   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:39.832951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:39.924020   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:39.924061   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
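
The cycle above (process 70908) probes for each control-plane component by name with "crictl ps -a --quiet --name=<component>" and treats empty output as that component not running, which is why every probe ends in found id: "". A minimal, illustrative Go sketch of that probe loop follows; it assumes only that crictl is installed and reachable via sudo, and it is not the minikube implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (any state) whose name
// matches the given component, using the same crictl invocation as in the log above.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		switch {
		case err != nil:
			fmt.Printf("error probing %q: %v\n", c, err)
		case len(ids) == 0:
			fmt.Printf("no container found matching %q\n", c) // the empty-cluster case seen above
		default:
			fmt.Printf("%q -> %v\n", c, ids)
		}
	}
}
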
	I0311 21:38:39.648759   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:42.146226   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:40.950021   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:42.951344   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:41.528407   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:43.529130   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:43.529166   70458 pod_ready.go:81] duration metric: took 4m0.007627735s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	E0311 21:38:43.529179   70458 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0311 21:38:43.529188   70458 pod_ready.go:38] duration metric: took 4m4.551429192s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:38:43.529207   70458 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:38:43.529242   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:43.529306   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:43.589292   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:43.589314   70458 cri.go:89] found id: ""
	I0311 21:38:43.589323   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:43.589388   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.595182   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:43.595267   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:43.645002   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:43.645027   70458 cri.go:89] found id: ""
	I0311 21:38:43.645036   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:43.645088   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.650463   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:43.650537   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:43.693876   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:43.693894   70458 cri.go:89] found id: ""
	I0311 21:38:43.693902   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:43.693958   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.699273   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:43.699340   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:43.752552   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:43.752585   70458 cri.go:89] found id: ""
	I0311 21:38:43.752596   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:43.752667   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.758307   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:43.758384   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:43.802761   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:43.802789   70458 cri.go:89] found id: ""
	I0311 21:38:43.802798   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:43.802858   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.807796   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:43.807867   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:43.853820   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:43.853843   70458 cri.go:89] found id: ""
	I0311 21:38:43.853851   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:43.853907   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.859377   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:43.859451   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:43.910605   70458 cri.go:89] found id: ""
	I0311 21:38:43.910640   70458 logs.go:276] 0 containers: []
	W0311 21:38:43.910648   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:43.910655   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:43.910702   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:43.955602   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:43.955624   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:43.955629   70458 cri.go:89] found id: ""
	I0311 21:38:43.955645   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:43.955713   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.960856   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.965889   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:43.965919   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:44.013879   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:44.013908   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:44.064641   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:44.064669   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:44.118095   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:44.118120   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:44.177775   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:44.177819   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:44.242090   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:44.242129   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:44.261628   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:44.261665   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:44.322616   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:44.322656   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:44.388117   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:44.388159   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:44.445980   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:44.446018   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:44.980199   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:44.980243   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:45.138312   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:45.138368   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:45.208626   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:45.208664   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:42.472932   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:42.488034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:42.488090   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:42.530945   70908 cri.go:89] found id: ""
	I0311 21:38:42.530971   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.530981   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:42.530989   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:42.531053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:42.571906   70908 cri.go:89] found id: ""
	I0311 21:38:42.571939   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.571951   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:42.571960   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:42.572029   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:42.613198   70908 cri.go:89] found id: ""
	I0311 21:38:42.613228   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.613239   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:42.613247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:42.613330   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:42.654740   70908 cri.go:89] found id: ""
	I0311 21:38:42.654762   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.654770   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:42.654775   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:42.654821   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:42.694797   70908 cri.go:89] found id: ""
	I0311 21:38:42.694836   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.694847   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:42.694854   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:42.694931   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:42.738918   70908 cri.go:89] found id: ""
	I0311 21:38:42.738946   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.738958   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:42.738965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:42.739032   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:42.780836   70908 cri.go:89] found id: ""
	I0311 21:38:42.780870   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.780881   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:42.780888   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:42.780943   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:42.824672   70908 cri.go:89] found id: ""
	I0311 21:38:42.824701   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.824712   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:42.824721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:42.824747   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:42.877219   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:42.877253   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:42.934996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:42.935033   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:42.952125   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:42.952152   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:43.036657   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:43.036678   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:43.036695   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:45.629959   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:45.648501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:45.648581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:45.690083   70908 cri.go:89] found id: ""
	I0311 21:38:45.690117   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.690128   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:45.690136   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:45.690201   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:45.736497   70908 cri.go:89] found id: ""
	I0311 21:38:45.736519   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.736526   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:45.736531   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:45.736576   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:45.778590   70908 cri.go:89] found id: ""
	I0311 21:38:45.778625   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.778636   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:45.778645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:45.778723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:45.822322   70908 cri.go:89] found id: ""
	I0311 21:38:45.822351   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.822359   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:45.822365   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:45.822419   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:45.868591   70908 cri.go:89] found id: ""
	I0311 21:38:45.868618   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.868627   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:45.868633   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:45.868680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:45.915137   70908 cri.go:89] found id: ""
	I0311 21:38:45.915165   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.915178   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:45.915187   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:45.915258   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:45.960432   70908 cri.go:89] found id: ""
	I0311 21:38:45.960459   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.960469   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:45.960476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:45.960529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:46.006089   70908 cri.go:89] found id: ""
	I0311 21:38:46.006168   70908 logs.go:276] 0 containers: []
	W0311 21:38:46.006185   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:46.006195   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:46.006209   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:44.153091   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:46.650654   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:44.951550   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:46.952791   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:47.756629   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:47.776613   70458 api_server.go:72] duration metric: took 4m14.182101385s to wait for apiserver process to appear ...
	I0311 21:38:47.776651   70458 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:38:47.776691   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:47.776774   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:47.826534   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:47.826553   70458 cri.go:89] found id: ""
	I0311 21:38:47.826560   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:47.826609   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.831565   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:47.831637   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:47.876504   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:47.876531   70458 cri.go:89] found id: ""
	I0311 21:38:47.876541   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:47.876598   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.882130   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:47.882224   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:47.930064   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:47.930087   70458 cri.go:89] found id: ""
	I0311 21:38:47.930096   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:47.930139   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.935357   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:47.935433   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:47.989169   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:47.989196   70458 cri.go:89] found id: ""
	I0311 21:38:47.989206   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:47.989262   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.994341   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:47.994401   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:48.037592   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:48.037619   70458 cri.go:89] found id: ""
	I0311 21:38:48.037629   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:48.037692   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.043377   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:48.043453   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:48.088629   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:48.088651   70458 cri.go:89] found id: ""
	I0311 21:38:48.088671   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:48.088722   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.093944   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:48.094016   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:48.144943   70458 cri.go:89] found id: ""
	I0311 21:38:48.144971   70458 logs.go:276] 0 containers: []
	W0311 21:38:48.144983   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:48.144990   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:48.145050   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:48.188857   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:48.188877   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:48.188881   70458 cri.go:89] found id: ""
	I0311 21:38:48.188887   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:48.188934   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.195123   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.200643   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:48.200673   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:48.246864   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:48.246894   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:48.715510   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:48.715545   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:48.775676   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:48.775716   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:48.793121   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:48.793157   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:48.863992   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:48.864040   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:48.922775   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:48.922810   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:48.996820   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:48.996866   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:49.045065   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:49.045097   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:49.199072   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:49.199137   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:49.283329   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:49.283360   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:49.340461   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:49.340502   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:49.391436   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:49.391460   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
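
On the cluster where the control plane is running (process 70458), each container ID found above is then read back with "crictl logs --tail 400 <id>", while host-level logs come from journalctl (kubelet, crio) and dmesg. A small illustrative Go helper for the same tail operation, assuming only that crictl is available via sudo (hypothetical helper, not minikube code):

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLog returns the last n log lines of the container with the given
// ID, mirroring the crictl logs --tail calls in the log above. Illustrative sketch only.
func tailContainerLog(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("crictl logs for %s: %w", id, err)
	}
	return string(out), nil
}

func main() {
	// ID of the kube-apiserver container found in the run above.
	const apiserverID = "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	logs, err := tailContainerLog(apiserverID, 400)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Print(logs)
}
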
	I0311 21:38:46.064257   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:46.064296   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:46.080304   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:46.080337   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:46.177978   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:46.178001   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:46.178017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:46.265260   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:46.265298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:48.814221   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:48.835695   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:48.835793   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:48.898391   70908 cri.go:89] found id: ""
	I0311 21:38:48.898418   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.898429   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:48.898437   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:48.898501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:48.972552   70908 cri.go:89] found id: ""
	I0311 21:38:48.972596   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.972607   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:48.972617   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:48.972684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:49.022346   70908 cri.go:89] found id: ""
	I0311 21:38:49.022371   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.022379   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:49.022384   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:49.022430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:49.078415   70908 cri.go:89] found id: ""
	I0311 21:38:49.078444   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.078455   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:49.078463   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:49.078526   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:49.119369   70908 cri.go:89] found id: ""
	I0311 21:38:49.119402   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.119412   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:49.119420   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:49.119497   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:49.169866   70908 cri.go:89] found id: ""
	I0311 21:38:49.169897   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.169908   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:49.169916   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:49.169978   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:49.223619   70908 cri.go:89] found id: ""
	I0311 21:38:49.223642   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.223650   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:49.223656   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:49.223704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:49.278499   70908 cri.go:89] found id: ""
	I0311 21:38:49.278531   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.278542   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:49.278551   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:49.278563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:49.294734   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:49.294760   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:49.390223   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:49.390252   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:49.390267   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:49.481214   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:49.481250   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:49.530285   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:49.530321   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:49.149825   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.648269   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:53.140832   70604 pod_ready.go:81] duration metric: took 4m0.000856291s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" ...
	E0311 21:38:53.140873   70604 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" (will not retry!)
	I0311 21:38:53.140895   70604 pod_ready.go:38] duration metric: took 4m13.032115697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:38:53.140925   70604 kubeadm.go:591] duration metric: took 4m21.406945055s to restartPrimaryControlPlane
	W0311 21:38:53.140993   70604 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:38:53.141028   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
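
At this point process 70604 has waited the full 4m0s for metrics-server-57f55c9bc5-7qw98 to report Ready, gives up, and falls back to a full kubeadm reset before re-bootstrapping the control plane. The readiness wait amounts to repeatedly checking the pod's Ready condition; an illustrative Go sketch of such a poll via kubectl follows (the pod name, namespace, and 4-minute budget are taken from this run, but the helper itself is hypothetical and not how pod_ready.go is implemented):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the named pod has a Ready condition of "True",
// queried through kubectl. Illustrative only.
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // same budget as the wait above
	for time.Now().Before(deadline) {
		ready, err := podReady("kube-system", "metrics-server-57f55c9bc5-7qw98")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
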
	I0311 21:38:49.450738   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.950491   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:53.952209   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.955522   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:38:51.961814   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0311 21:38:51.963188   70458 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:38:51.963209   70458 api_server.go:131] duration metric: took 4.186550701s to wait for apiserver health ...
	I0311 21:38:51.963218   70458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:38:51.963242   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:51.963294   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:52.020708   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:52.020727   70458 cri.go:89] found id: ""
	I0311 21:38:52.020746   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:52.020815   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.026606   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:52.026668   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:52.072045   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:52.072063   70458 cri.go:89] found id: ""
	I0311 21:38:52.072071   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:52.072130   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.078592   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:52.078771   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:52.139445   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:52.139480   70458 cri.go:89] found id: ""
	I0311 21:38:52.139490   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:52.139548   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.148641   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:52.148724   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:52.199332   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:52.199360   70458 cri.go:89] found id: ""
	I0311 21:38:52.199371   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:52.199433   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.207033   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:52.207096   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:52.267514   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:52.267540   70458 cri.go:89] found id: ""
	I0311 21:38:52.267549   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:52.267615   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.274048   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:52.274132   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:52.330293   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:52.330324   70458 cri.go:89] found id: ""
	I0311 21:38:52.330334   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:52.330395   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.336062   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:52.336143   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:52.381909   70458 cri.go:89] found id: ""
	I0311 21:38:52.381941   70458 logs.go:276] 0 containers: []
	W0311 21:38:52.381952   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:52.381960   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:52.382026   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:52.441879   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:52.441908   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:52.441919   70458 cri.go:89] found id: ""
	I0311 21:38:52.441928   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:52.441988   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.449288   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.456632   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:52.456664   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.526327   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:52.526368   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:52.545008   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:52.545035   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:52.699959   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:52.699995   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:52.762045   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:52.762079   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:52.828963   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:52.829005   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:52.874202   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:52.874237   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:52.916842   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:52.916872   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:52.969778   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:52.969807   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:53.365097   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:53.365147   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:53.446533   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:53.446576   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:53.500017   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:53.500043   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:53.572904   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:53.572954   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
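
For process 70458 the apiserver did come up, and its health was confirmed by polling https://192.168.39.36:8443/healthz until it returned 200 ok (21:38:51 above). An illustrative Go sketch of that kind of poll follows; the address is the one from this run, and skipping TLS verification is purely a simplification for the sketch, not how minikube's client authenticates:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint from the run above; adjust for another cluster.
	url := "https://192.168.39.36:8443/healthz"

	// For illustration we skip certificate verification; a real client would
	// trust the cluster CA from the kubeconfig instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz returned 200: ok")
				return
			}
			fmt.Println("healthz status:", resp.StatusCode)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for apiserver healthz")
}
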
	I0311 21:38:52.087848   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:52.108284   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:52.108351   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:52.161648   70908 cri.go:89] found id: ""
	I0311 21:38:52.161680   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.161691   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:52.161698   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:52.161763   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:52.206552   70908 cri.go:89] found id: ""
	I0311 21:38:52.206577   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.206588   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:52.206596   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:52.206659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:52.253954   70908 cri.go:89] found id: ""
	I0311 21:38:52.253984   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.253996   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:52.254004   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:52.254068   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:52.302343   70908 cri.go:89] found id: ""
	I0311 21:38:52.302384   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.302396   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:52.302404   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:52.302472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:52.345581   70908 cri.go:89] found id: ""
	I0311 21:38:52.345608   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.345618   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:52.345624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:52.345683   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:52.392502   70908 cri.go:89] found id: ""
	I0311 21:38:52.392531   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.392542   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:52.392549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:52.392601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:52.447625   70908 cri.go:89] found id: ""
	I0311 21:38:52.447651   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.447661   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:52.447668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:52.447728   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:52.490965   70908 cri.go:89] found id: ""
	I0311 21:38:52.490994   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.491007   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:52.491019   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:52.491034   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:52.539604   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:52.539650   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.597735   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:52.597771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:52.617572   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:52.617610   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:52.706724   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:52.706753   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:52.706769   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:55.293550   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:55.313904   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:55.314005   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:55.368607   70908 cri.go:89] found id: ""
	I0311 21:38:55.368639   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.368647   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:55.368654   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:55.368714   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:55.434052   70908 cri.go:89] found id: ""
	I0311 21:38:55.434081   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.434092   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:55.434100   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:55.434189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:55.483532   70908 cri.go:89] found id: ""
	I0311 21:38:55.483562   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.483572   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:55.483579   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:55.483647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:55.528681   70908 cri.go:89] found id: ""
	I0311 21:38:55.528708   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.528721   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:55.528728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:55.528825   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:55.583143   70908 cri.go:89] found id: ""
	I0311 21:38:55.583167   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.583174   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:55.583179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:55.583240   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:55.636577   70908 cri.go:89] found id: ""
	I0311 21:38:55.636599   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.636607   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:55.636612   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:55.636670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:55.697268   70908 cri.go:89] found id: ""
	I0311 21:38:55.697295   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.697306   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:55.697314   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:55.697374   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:55.749272   70908 cri.go:89] found id: ""
	I0311 21:38:55.749302   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.749312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:55.749322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:55.749335   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:55.841581   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:55.841643   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:55.898537   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:55.898574   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:55.973278   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:55.973329   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:55.992958   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:55.992986   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:56.137313   70458 system_pods.go:59] 8 kube-system pods found
	I0311 21:38:56.137347   70458 system_pods.go:61] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running
	I0311 21:38:56.137354   70458 system_pods.go:61] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running
	I0311 21:38:56.137361   70458 system_pods.go:61] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running
	I0311 21:38:56.137366   70458 system_pods.go:61] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running
	I0311 21:38:56.137371   70458 system_pods.go:61] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:38:56.137375   70458 system_pods.go:61] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running
	I0311 21:38:56.137383   70458 system_pods.go:61] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:38:56.137390   70458 system_pods.go:61] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:38:56.137400   70458 system_pods.go:74] duration metric: took 4.174175629s to wait for pod list to return data ...
	I0311 21:38:56.137409   70458 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:38:56.140315   70458 default_sa.go:45] found service account: "default"
	I0311 21:38:56.140344   70458 default_sa.go:55] duration metric: took 2.92722ms for default service account to be created ...
	I0311 21:38:56.140356   70458 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:38:56.146873   70458 system_pods.go:86] 8 kube-system pods found
	I0311 21:38:56.146912   70458 system_pods.go:89] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running
	I0311 21:38:56.146923   70458 system_pods.go:89] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running
	I0311 21:38:56.146932   70458 system_pods.go:89] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running
	I0311 21:38:56.146940   70458 system_pods.go:89] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running
	I0311 21:38:56.146945   70458 system_pods.go:89] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:38:56.146951   70458 system_pods.go:89] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running
	I0311 21:38:56.146960   70458 system_pods.go:89] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:38:56.146972   70458 system_pods.go:89] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:38:56.146983   70458 system_pods.go:126] duration metric: took 6.619737ms to wait for k8s-apps to be running ...
	I0311 21:38:56.146998   70458 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:38:56.147056   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:38:56.165354   70458 system_svc.go:56] duration metric: took 18.346754ms WaitForService to wait for kubelet
	I0311 21:38:56.165387   70458 kubeadm.go:576] duration metric: took 4m22.570894549s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:38:56.165413   70458 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:38:56.168819   70458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:38:56.168845   70458 node_conditions.go:123] node cpu capacity is 2
	I0311 21:38:56.168856   70458 node_conditions.go:105] duration metric: took 3.437527ms to run NodePressure ...
	I0311 21:38:56.168868   70458 start.go:240] waiting for startup goroutines ...
	I0311 21:38:56.168875   70458 start.go:245] waiting for cluster config update ...
	I0311 21:38:56.168885   70458 start.go:254] writing updated cluster config ...
	I0311 21:38:56.169153   70458 ssh_runner.go:195] Run: rm -f paused
	I0311 21:38:56.225977   70458 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0311 21:38:56.228234   70458 out.go:177] * Done! kubectl is now configured to use "no-preload-324578" cluster and "default" namespace by default
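	A minimal way to confirm this result from the host, assuming the profile name shown in the line above and a default kubeconfig location, would be the following sketch (illustrative only, not part of the captured test output):
	
	    # minikube reports it has pointed kubectl at the new cluster; verify the active context
	    kubectl config current-context                               # expected: no-preload-324578
	    # list the kube-system pods that the readiness checks above enumerated
	    kubectl --context no-preload-324578 get pods -n kube-system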
	I0311 21:38:56.450729   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:58.450799   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	W0311 21:38:56.084193   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:58.584354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:58.604767   70908 kubeadm.go:591] duration metric: took 4m4.440744932s to restartPrimaryControlPlane
	W0311 21:38:58.604844   70908 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:38:58.604872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:38:59.965834   70908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.36094005s)
	I0311 21:38:59.965906   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:38:59.982020   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:38:59.994794   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:39:00.007116   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:39:00.007138   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:39:00.007182   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:39:00.019744   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:39:00.019802   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:39:00.033311   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:39:00.045608   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:39:00.045685   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:39:00.059722   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.071140   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:39:00.071199   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.082635   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:39:00.093311   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:39:00.093374   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:39:00.104995   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:39:00.372164   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:39:00.950799   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:03.450080   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:05.949899   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:07.950640   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:10.450583   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:12.949481   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:14.950496   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:16.951064   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:18.958165   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:21.450609   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:23.949791   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:26.302837   70604 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.161781704s)
	I0311 21:39:26.302921   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:39:26.319602   70604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:39:26.331483   70604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:39:26.343632   70604 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:39:26.343658   70604 kubeadm.go:156] found existing configuration files:
	
	I0311 21:39:26.343705   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:39:26.354863   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:39:26.354919   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:39:26.366087   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:39:26.377221   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:39:26.377282   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:39:26.389769   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:39:26.401201   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:39:26.401255   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:39:26.412357   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:39:26.423962   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:39:26.424035   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:39:26.436189   70604 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:39:26.672030   70604 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:39:25.952857   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:28.449272   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:30.450630   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:32.450912   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:35.908605   70604 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 21:39:35.908656   70604 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:39:35.908751   70604 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:39:35.908846   70604 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:39:35.908967   70604 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:39:35.909026   70604 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:39:35.910690   70604 out.go:204]   - Generating certificates and keys ...
	I0311 21:39:35.910785   70604 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:39:35.910849   70604 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:39:35.910952   70604 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:39:35.911039   70604 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:39:35.911106   70604 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:39:35.911177   70604 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:39:35.911268   70604 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:39:35.911353   70604 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:39:35.911449   70604 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:39:35.911551   70604 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:39:35.911604   70604 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:39:35.911689   70604 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:39:35.911762   70604 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:39:35.911869   70604 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:39:35.911974   70604 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:39:35.912067   70604 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:39:35.912217   70604 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:39:35.912320   70604 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:39:35.914908   70604 out.go:204]   - Booting up control plane ...
	I0311 21:39:35.915026   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:39:35.915126   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:39:35.915216   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:39:35.915321   70604 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:39:35.915431   70604 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:39:35.915487   70604 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:39:35.915659   70604 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:39:35.915792   70604 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503325 seconds
	I0311 21:39:35.915925   70604 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:39:35.916039   70604 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:39:35.916091   70604 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:39:35.916314   70604 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-743937 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:39:35.916408   70604 kubeadm.go:309] [bootstrap-token] Using token: hxeoeg.f2scq51qa57vwzwt
	I0311 21:39:35.917880   70604 out.go:204]   - Configuring RBAC rules ...
	I0311 21:39:35.917995   70604 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:39:35.918093   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:39:35.918297   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:39:35.918490   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:39:35.918629   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:39:35.918745   70604 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:39:35.918907   70604 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:39:35.918974   70604 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:39:35.919031   70604 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:39:35.919048   70604 kubeadm.go:309] 
	I0311 21:39:35.919118   70604 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:39:35.919128   70604 kubeadm.go:309] 
	I0311 21:39:35.919225   70604 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:39:35.919236   70604 kubeadm.go:309] 
	I0311 21:39:35.919266   70604 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:39:35.919344   70604 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:39:35.919405   70604 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:39:35.919412   70604 kubeadm.go:309] 
	I0311 21:39:35.919461   70604 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:39:35.919467   70604 kubeadm.go:309] 
	I0311 21:39:35.919505   70604 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:39:35.919511   70604 kubeadm.go:309] 
	I0311 21:39:35.919553   70604 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:39:35.919640   70604 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:39:35.919727   70604 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:39:35.919736   70604 kubeadm.go:309] 
	I0311 21:39:35.919835   70604 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:39:35.919949   70604 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:39:35.919964   70604 kubeadm.go:309] 
	I0311 21:39:35.920071   70604 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token hxeoeg.f2scq51qa57vwzwt \
	I0311 21:39:35.920172   70604 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:39:35.920193   70604 kubeadm.go:309] 	--control-plane 
	I0311 21:39:35.920199   70604 kubeadm.go:309] 
	I0311 21:39:35.920271   70604 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:39:35.920280   70604 kubeadm.go:309] 
	I0311 21:39:35.920349   70604 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token hxeoeg.f2scq51qa57vwzwt \
	I0311 21:39:35.920479   70604 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:39:35.920507   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:39:35.920517   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:39:35.922125   70604 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:39:35.923386   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:39:35.955828   70604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:39:36.065309   70604 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:39:36.065389   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:36.065408   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-743937 minikube.k8s.io/updated_at=2024_03_11T21_39_36_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=embed-certs-743937 minikube.k8s.io/primary=true
	I0311 21:39:36.370945   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:36.370961   70604 ops.go:34] apiserver oom_adj: -16
	I0311 21:39:36.871194   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:37.371937   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:37.871974   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:38.371330   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:38.871791   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:34.949300   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:36.942990   70417 pod_ready.go:81] duration metric: took 4m0.000574155s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" ...
	E0311 21:39:36.943022   70417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0311 21:39:36.943043   70417 pod_ready.go:38] duration metric: took 4m12.043798271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:36.943093   70417 kubeadm.go:591] duration metric: took 4m20.121624644s to restartPrimaryControlPlane
	W0311 21:39:36.943155   70417 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:39:36.943183   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:39:39.371531   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:39.872032   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:40.371717   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:40.871615   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:41.371577   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:41.871841   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:42.371050   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:42.871044   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:43.371446   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:43.871815   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:44.371243   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:44.872056   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:45.371993   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:45.871213   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:46.371397   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:46.871185   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.371541   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.871121   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.971855   70604 kubeadm.go:1106] duration metric: took 11.906533451s to wait for elevateKubeSystemPrivileges
	W0311 21:39:47.971895   70604 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 21:39:47.971902   70604 kubeadm.go:393] duration metric: took 5m16.305518086s to StartCluster
	I0311 21:39:47.971917   70604 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:39:47.972003   70604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:39:47.974339   70604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:39:47.974576   70604 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:39:47.976309   70604 out.go:177] * Verifying Kubernetes components...
	I0311 21:39:47.974638   70604 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:39:47.974819   70604 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:39:47.977737   70604 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-743937"
	I0311 21:39:47.977746   70604 addons.go:69] Setting default-storageclass=true in profile "embed-certs-743937"
	I0311 21:39:47.977779   70604 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-743937"
	W0311 21:39:47.977790   70604 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:39:47.977815   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.977740   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:39:47.977779   70604 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-743937"
	I0311 21:39:47.977750   70604 addons.go:69] Setting metrics-server=true in profile "embed-certs-743937"
	I0311 21:39:47.977943   70604 addons.go:234] Setting addon metrics-server=true in "embed-certs-743937"
	W0311 21:39:47.977957   70604 addons.go:243] addon metrics-server should already be in state true
	I0311 21:39:47.977985   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.978241   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978241   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978270   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.978275   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.978419   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978449   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.994019   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44139
	I0311 21:39:47.994131   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0311 21:39:47.994484   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.994514   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.994964   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.994983   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.995128   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.995143   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.995288   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0311 21:39:47.995437   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.995506   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.995583   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:47.996051   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.996073   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.996516   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.996999   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.997024   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.997383   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.997834   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.997858   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.999381   70604 addons.go:234] Setting addon default-storageclass=true in "embed-certs-743937"
	W0311 21:39:47.999406   70604 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:39:47.999432   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.999794   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.999823   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:48.012063   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41291
	I0311 21:39:48.012470   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.012899   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.012923   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.013267   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.013334   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0311 21:39:48.013484   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.013767   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.014259   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.014279   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.014556   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.014752   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.015486   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.017650   70604 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:39:48.016591   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.019717   70604 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:39:48.019736   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:39:48.019758   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.021823   70604 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:39:48.023083   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:39:48.023095   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:39:48.023108   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.023306   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.023589   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0311 21:39:48.023916   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.023937   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.024255   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.024412   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.024533   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.024653   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.025517   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.025955   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.025967   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.026292   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.027365   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.027654   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:48.027692   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:48.027909   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.027965   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.028188   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.028369   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.028496   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.028603   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.048933   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0311 21:39:48.049338   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.049918   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.049929   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.050342   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.050502   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.052274   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.052523   70604 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:39:48.052537   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:39:48.052554   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.055438   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.055864   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.055881   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.056156   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.056334   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.056495   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.056608   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.175402   70604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:39:48.196199   70604 node_ready.go:35] waiting up to 6m0s for node "embed-certs-743937" to be "Ready" ...
	I0311 21:39:48.215911   70604 node_ready.go:49] node "embed-certs-743937" has status "Ready":"True"
	I0311 21:39:48.215935   70604 node_ready.go:38] duration metric: took 19.701474ms for node "embed-certs-743937" to be "Ready" ...
	I0311 21:39:48.215945   70604 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:48.223525   70604 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.228887   70604 pod_ready.go:92] pod "etcd-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.228907   70604 pod_ready.go:81] duration metric: took 5.35597ms for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.228917   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.233811   70604 pod_ready.go:92] pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.233828   70604 pod_ready.go:81] duration metric: took 4.904721ms for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.233839   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.241831   70604 pod_ready.go:92] pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.241848   70604 pod_ready.go:81] duration metric: took 8.002663ms for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.241857   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.247609   70604 pod_ready.go:92] pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.247633   70604 pod_ready.go:81] duration metric: took 5.767693ms for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.247641   70604 pod_ready.go:38] duration metric: took 31.680305ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:48.247656   70604 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:39:48.247704   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:39:48.270201   70604 api_server.go:72] duration metric: took 295.596568ms to wait for apiserver process to appear ...
	I0311 21:39:48.270224   70604 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:39:48.270242   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:39:48.277642   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0311 21:39:48.280487   70604 api_server.go:141] control plane version: v1.28.4
	I0311 21:39:48.280505   70604 api_server.go:131] duration metric: took 10.273204ms to wait for apiserver health ...
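	The healthz probe logged above can be reproduced by hand against the same endpoint, assuming the IP and port taken from the log (sketch only; -k skips certificate verification because the apiserver serves a cluster-internal certificate):
	
	    curl -k https://192.168.50.114:8443/healthz    # a healthy apiserver answers HTTP 200 with body "ok"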
	I0311 21:39:48.280514   70604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:39:48.343718   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:39:48.346848   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:39:48.346864   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:39:48.400878   70604 system_pods.go:59] 4 kube-system pods found
	I0311 21:39:48.400907   70604 system_pods.go:61] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:48.400913   70604 system_pods.go:61] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:48.400919   70604 system_pods.go:61] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:48.400923   70604 system_pods.go:61] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:48.400931   70604 system_pods.go:74] duration metric: took 120.410888ms to wait for pod list to return data ...
	I0311 21:39:48.400940   70604 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:39:48.401062   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:39:48.401083   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:39:48.406115   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:39:48.492018   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:39:48.492042   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:39:48.581187   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:39:48.602016   70604 default_sa.go:45] found service account: "default"
	I0311 21:39:48.602046   70604 default_sa.go:55] duration metric: took 201.097662ms for default service account to be created ...
	I0311 21:39:48.602056   70604 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:39:48.862115   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:48.862148   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending
	I0311 21:39:48.862155   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending
	I0311 21:39:48.862159   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:48.862164   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:48.862169   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:48.862176   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:48.862180   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:48.862199   70604 retry.go:31] will retry after 266.08114ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.139648   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.139675   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.139682   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.139689   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.139694   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.139700   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.139706   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.139710   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.139724   70604 retry.go:31] will retry after 293.420416ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.476384   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.476411   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.476418   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.476423   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.476429   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.476433   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.476438   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.476442   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.476456   70604 retry.go:31] will retry after 439.10065ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.927298   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.927337   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.927348   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.927357   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.927366   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.927373   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.927381   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.927389   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.927411   70604 retry.go:31] will retry after 396.604462ms: missing components: kube-dns, kube-proxy
	I0311 21:39:50.092631   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.68647s)
	I0311 21:39:50.092698   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.092718   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093147   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093200   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093223   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093241   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093280   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.749522465s)
	I0311 21:39:50.093321   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093336   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093507   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093529   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093746   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.093759   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093773   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093797   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093805   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.094040   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.094041   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.094067   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.111807   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.111831   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.112109   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.112127   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.112132   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.291598   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.710367476s)
	I0311 21:39:50.291651   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.291671   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.292020   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.292036   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.292044   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.292050   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.292287   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.292328   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.292352   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.292367   70604 addons.go:470] Verifying addon metrics-server=true in "embed-certs-743937"
	I0311 21:39:50.294192   70604 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0311 21:39:50.295405   70604 addons.go:505] duration metric: took 2.320766016s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0311 21:39:50.339623   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:50.339651   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:50.339658   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:50.339665   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:50.339671   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:50.339677   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:50.339682   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:50.339688   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:50.339695   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:50.339704   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:39:50.339728   70604 retry.go:31] will retry after 674.573171ms: missing components: kube-dns
	I0311 21:39:51.021666   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:51.021704   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.021716   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.021723   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:51.021731   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:51.021743   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:51.021754   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:51.021760   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:51.021772   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:51.021786   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:39:51.021805   70604 retry.go:31] will retry after 716.470399ms: missing components: kube-dns
	I0311 21:39:51.745786   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:51.745818   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.745829   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.745840   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:51.745849   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:51.745855   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:51.745861   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:51.745867   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:51.745876   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:51.745886   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Running
	I0311 21:39:51.745904   70604 retry.go:31] will retry after 873.920018ms: missing components: kube-dns
	I0311 21:39:52.627896   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:52.627922   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Running
	I0311 21:39:52.627927   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Running
	I0311 21:39:52.627932   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:52.627936   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:52.627941   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:52.627944   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:52.627948   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:52.627954   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:52.627958   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Running
	I0311 21:39:52.627966   70604 system_pods.go:126] duration metric: took 4.025903884s to wait for k8s-apps to be running ...
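The retries above (system_pods.go plus retry.go) follow a plain poll-until-running pattern: list the kube-system pods, report which required components are still missing, sleep a short randomized interval, and try again until nothing is missing. The sketch below shows that pattern in miniature; it is not minikube's actual code, it fakes the pod phases instead of calling the Kubernetes API, and the component names and delays are illustrative only.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// missing returns the required components whose observed pod phase is not "Running".
// In minikube the phases come from the API server; here they are hard-coded.
func missing(phases map[string]string, required []string) []string {
	var out []string
	for _, name := range required {
		if phases[name] != "Running" {
			out = append(out, name)
		}
	}
	return out
}

func main() {
	required := []string{"kube-dns", "kube-proxy"}
	phases := map[string]string{"kube-dns": "Pending", "kube-proxy": "Pending"}

	for attempt := 1; ; attempt++ {
		m := missing(phases, required)
		if len(m) == 0 {
			fmt.Println("all required components are running")
			return
		}
		delay := time.Duration(200+rand.Intn(500)) * time.Millisecond
		fmt.Printf("will retry after %v: missing components: %v\n", delay, m)
		time.Sleep(delay)
		if attempt == 3 { // simulate the pods becoming Running after a few polls
			phases["kube-dns"], phases["kube-proxy"] = "Running", "Running"
		}
	}
}
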
	I0311 21:39:52.627976   70604 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:39:52.628017   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:39:52.643356   70604 system_svc.go:56] duration metric: took 15.371853ms WaitForService to wait for kubelet
	I0311 21:39:52.643378   70604 kubeadm.go:576] duration metric: took 4.668777182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:39:52.643394   70604 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:39:52.646844   70604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:39:52.646862   70604 node_conditions.go:123] node cpu capacity is 2
	I0311 21:39:52.646871   70604 node_conditions.go:105] duration metric: took 3.47245ms to run NodePressure ...
	I0311 21:39:52.646881   70604 start.go:240] waiting for startup goroutines ...
	I0311 21:39:52.646891   70604 start.go:245] waiting for cluster config update ...
	I0311 21:39:52.646904   70604 start.go:254] writing updated cluster config ...
	I0311 21:39:52.647207   70604 ssh_runner.go:195] Run: rm -f paused
	I0311 21:39:52.697687   70604 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:39:52.699641   70604 out.go:177] * Done! kubectl is now configured to use "embed-certs-743937" cluster and "default" namespace by default
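The closing line for this profile compares the host's kubectl (1.29.2) against the cluster version (1.28.4) and reports a minor-version skew of 1. As a rough sketch only, a skew value like that can be computed from two version strings as below; the minorOf helper is mine, the parsing is illustrative rather than minikube's implementation, and it assumes the major versions match.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor component of a "major.minor.patch" version string.
// It assumes both versions share the same major version.
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, err := strconv.Atoi(parts[1])
	if err != nil {
		return 0
	}
	return m
}

func main() {
	kubectlVersion, clusterVersion := "1.29.2", "1.28.4"
	skew := minorOf(kubectlVersion) - minorOf(clusterVersion)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
}
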
	I0311 21:40:09.411155   70417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.467938624s)
	I0311 21:40:09.411245   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:09.429951   70417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:40:09.442265   70417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:40:09.453883   70417 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:40:09.453899   70417 kubeadm.go:156] found existing configuration files:
	
	I0311 21:40:09.453934   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0311 21:40:09.465106   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:40:09.465161   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:40:09.476155   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0311 21:40:09.487366   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:40:09.487413   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:40:09.497877   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0311 21:40:09.508056   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:40:09.508096   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:40:09.518709   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0311 21:40:09.529005   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:40:09.529039   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
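The four grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint (https://control-plane.minikube.internal:8444 in this run) and removed when it does not reference it, so the kubeadm init that follows can write fresh files. The loop below is a rough local sketch of that logic using the same commands from the log; minikube actually executes them over SSH inside the guest, with fuller error handling.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the endpoint (or the file itself) is missing.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing\n", conf, endpoint)
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}
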
	I0311 21:40:09.539755   70417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:40:09.601265   70417 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 21:40:09.601399   70417 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:09.771387   70417 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:09.771548   70417 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:09.771653   70417 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:10.016610   70417 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:10.018526   70417 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:10.018613   70417 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:10.018670   70417 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:10.018752   70417 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:10.018830   70417 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:10.018926   70417 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:10.019019   70417 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:10.019436   70417 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:10.019924   70417 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:10.020435   70417 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:10.020949   70417 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:10.021470   70417 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:10.021550   70417 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:10.087827   70417 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:10.326702   70417 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:10.515476   70417 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:10.585573   70417 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:10.586277   70417 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:10.588784   70417 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:10.590786   70417 out.go:204]   - Booting up control plane ...
	I0311 21:40:10.590969   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:10.591080   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:10.591164   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:10.613086   70417 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:10.613187   70417 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:10.613224   70417 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:10.753737   70417 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:40:17.258016   70417 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503151 seconds
	I0311 21:40:17.258170   70417 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:40:17.276142   70417 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:40:17.805116   70417 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:40:17.805383   70417 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-766430 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:40:18.323836   70417 kubeadm.go:309] [bootstrap-token] Using token: 9sjslg.sf5b1bfk3wp77z35
	I0311 21:40:18.325382   70417 out.go:204]   - Configuring RBAC rules ...
	I0311 21:40:18.325478   70417 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:40:18.331585   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:40:18.344341   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:40:18.348362   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:40:18.352181   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:40:18.363299   70417 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:40:18.377835   70417 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:40:18.612013   70417 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:40:18.755215   70417 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:40:18.755235   70417 kubeadm.go:309] 
	I0311 21:40:18.755300   70417 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:40:18.755314   70417 kubeadm.go:309] 
	I0311 21:40:18.755434   70417 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:40:18.755460   70417 kubeadm.go:309] 
	I0311 21:40:18.755490   70417 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:40:18.755571   70417 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:40:18.755636   70417 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:40:18.755647   70417 kubeadm.go:309] 
	I0311 21:40:18.755721   70417 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:40:18.755731   70417 kubeadm.go:309] 
	I0311 21:40:18.755794   70417 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:40:18.755804   70417 kubeadm.go:309] 
	I0311 21:40:18.755876   70417 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:40:18.755941   70417 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:40:18.756010   70417 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:40:18.756029   70417 kubeadm.go:309] 
	I0311 21:40:18.756152   70417 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:40:18.756267   70417 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:40:18.756277   70417 kubeadm.go:309] 
	I0311 21:40:18.756391   70417 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 9sjslg.sf5b1bfk3wp77z35 \
	I0311 21:40:18.756533   70417 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:40:18.756578   70417 kubeadm.go:309] 	--control-plane 
	I0311 21:40:18.756585   70417 kubeadm.go:309] 
	I0311 21:40:18.756695   70417 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:40:18.756706   70417 kubeadm.go:309] 
	I0311 21:40:18.756844   70417 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 9sjslg.sf5b1bfk3wp77z35 \
	I0311 21:40:18.757021   70417 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:40:18.759444   70417 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:40:18.759474   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:40:18.759489   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:40:18.761354   70417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:40:18.762676   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:40:18.793496   70417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
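The 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration the log announces, but its exact contents are not shown. Purely as an illustration, a typical bridge plus host-local conflist of that shape looks like the string printed below; the subnet and plugin options are assumptions, not the file minikube actually wrote.

package main

import "fmt"

func main() {
	// Illustrative only: a generic bridge/host-local conflist in the CNI
	// "plugins" format; minikube's real 1-k8s.conflist may differ.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`
	fmt.Println(conflist)
}
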
	I0311 21:40:18.840426   70417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:40:18.840508   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:18.840508   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-766430 minikube.k8s.io/updated_at=2024_03_11T21_40_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=default-k8s-diff-port-766430 minikube.k8s.io/primary=true
	I0311 21:40:19.150012   70417 ops.go:34] apiserver oom_adj: -16
	I0311 21:40:19.150129   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:19.650947   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:20.150969   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:20.650687   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:21.150849   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:21.650356   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:22.150737   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:22.650225   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:23.150390   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:23.650650   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:24.151081   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:24.650689   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:25.150428   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:25.650265   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:26.150198   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:26.650610   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:27.150325   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:27.650794   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:28.150855   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:28.650819   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:29.150345   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:29.650746   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.150910   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.650742   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.790472   70417 kubeadm.go:1106] duration metric: took 11.95003413s to wait for elevateKubeSystemPrivileges
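The burst of identical "kubectl get sa default" runs above, one roughly every 500ms, is the wait that the elevateKubeSystemPrivileges duration at 21:40:30 measures: the command is retried until the default service account exists and kubectl exits 0. A simplified local sketch of that wait is below; the kubectl path and kubeconfig are taken from the log, the 500ms cadence mirrors what the timestamps show, and there is no overall timeout here, which the real code would have.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

	start := time.Now()
	for {
		// Exit status 0 means the "default" service account exists.
		err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run()
		if err == nil {
			fmt.Printf("default service account ready after %v\n", time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
}
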
	W0311 21:40:30.790506   70417 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 21:40:30.790513   70417 kubeadm.go:393] duration metric: took 5m14.024392605s to StartCluster
	I0311 21:40:30.790527   70417 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:40:30.790630   70417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:40:30.792582   70417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:40:30.792843   70417 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:40:30.794425   70417 out.go:177] * Verifying Kubernetes components...
	I0311 21:40:30.792920   70417 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:40:30.793051   70417 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:40:30.796119   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:40:30.796129   70417 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796160   70417 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.796171   70417 addons.go:243] addon metrics-server should already be in state true
	I0311 21:40:30.796197   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.796121   70417 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796127   70417 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796237   70417 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.796253   70417 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:40:30.796268   70417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-766430"
	I0311 21:40:30.796278   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.796663   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796694   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796699   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.796722   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.796777   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796807   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.812156   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0311 21:40:30.812601   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.813108   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.813138   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.813532   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.813995   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.814031   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.816427   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I0311 21:40:30.816626   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0311 21:40:30.816863   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.817015   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.817365   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.817385   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.817532   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.817557   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.817905   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.817908   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.818696   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.819070   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.819100   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.822839   70417 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.822858   70417 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:40:30.822885   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.823188   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.823202   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.834007   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I0311 21:40:30.834521   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.835017   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.835033   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.835418   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.835620   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.837838   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.839548   70417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:40:30.838397   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I0311 21:40:30.840244   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0311 21:40:30.840869   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:40:30.840885   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:40:30.840904   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.841295   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.841345   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.841877   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.841894   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.841994   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.842012   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.842246   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.842414   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.842448   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.842960   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.842985   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.844184   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.844406   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.845769   70417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:40:30.847105   70417 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:40:30.844838   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.847124   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:40:30.847142   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.845110   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.847151   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.847302   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.847424   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.847550   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:30.849856   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.850205   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.850232   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.850414   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.850575   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.850697   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.850835   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:30.861464   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0311 21:40:30.861799   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.862252   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.862271   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.862655   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.862818   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.864692   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.864956   70417 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:40:30.864978   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:40:30.864996   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.867548   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.867980   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.868013   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.868140   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.868300   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.868433   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.868558   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:31.037958   70417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:40:31.081173   70417 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-766430" to be "Ready" ...
	I0311 21:40:31.103697   70417 node_ready.go:49] node "default-k8s-diff-port-766430" has status "Ready":"True"
	I0311 21:40:31.103717   70417 node_ready.go:38] duration metric: took 22.519334ms for node "default-k8s-diff-port-766430" to be "Ready" ...
	I0311 21:40:31.103726   70417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:40:31.129595   70417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:31.184749   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:40:31.184771   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:40:31.194340   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:40:31.213567   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:40:31.213589   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:40:31.255647   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:40:31.255667   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:40:31.284917   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:40:31.309356   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:40:32.792293   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.597920266s)
	I0311 21:40:32.792337   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.792351   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.792625   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.792686   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.792703   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:32.792714   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.792724   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.793060   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.793086   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:32.793137   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.811230   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.811254   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.811583   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.811587   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.811606   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.156126   70417 pod_ready.go:92] pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.156148   70417 pod_ready.go:81] duration metric: took 2.026531002s for pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.156156   70417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.174226   70417 pod_ready.go:92] pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.174248   70417 pod_ready.go:81] duration metric: took 18.0858ms for pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.174257   70417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.186296   70417 pod_ready.go:92] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.186329   70417 pod_ready.go:81] duration metric: took 12.06396ms for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.186344   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.195902   70417 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.195930   70417 pod_ready.go:81] duration metric: took 9.577334ms for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.195945   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.203134   70417 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.203160   70417 pod_ready.go:81] duration metric: took 7.205172ms for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.203174   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t4fwc" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.449290   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.164324973s)
	I0311 21:40:33.449341   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.139948099s)
	I0311 21:40:33.449374   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449392   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449346   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449461   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449662   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449678   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449688   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449697   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449751   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.449795   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449810   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449823   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449836   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449886   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449905   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449926   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.450213   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.450256   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.450263   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.450272   70417 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-766430"
	I0311 21:40:33.453444   70417 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0311 21:40:33.454670   70417 addons.go:505] duration metric: took 2.661756652s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0311 21:40:33.534893   70417 pod_ready.go:92] pod "kube-proxy-t4fwc" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.534915   70417 pod_ready.go:81] duration metric: took 331.733613ms for pod "kube-proxy-t4fwc" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.534924   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.933950   70417 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.933973   70417 pod_ready.go:81] duration metric: took 399.042085ms for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.933981   70417 pod_ready.go:38] duration metric: took 2.830245804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:40:33.933994   70417 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:40:33.934053   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:40:33.953607   70417 api_server.go:72] duration metric: took 3.160728268s to wait for apiserver process to appear ...
	I0311 21:40:33.953629   70417 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:40:33.953650   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:40:33.959064   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0311 21:40:33.960101   70417 api_server.go:141] control plane version: v1.28.4
	I0311 21:40:33.960125   70417 api_server.go:131] duration metric: took 6.489682ms to wait for apiserver health ...
	I0311 21:40:33.960135   70417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:40:34.137026   70417 system_pods.go:59] 9 kube-system pods found
	I0311 21:40:34.137061   70417 system_pods.go:61] "coredns-5dd5756b68-kxjhf" [09678270-80f4-4bde-8080-3a3a41ecb356] Running
	I0311 21:40:34.137079   70417 system_pods.go:61] "coredns-5dd5756b68-qdcdw" [9f100559-2b0a-4068-a3e7-475b5865a1d9] Running
	I0311 21:40:34.137086   70417 system_pods.go:61] "etcd-default-k8s-diff-port-766430" [c09576c7-db47-4ce1-a8cb-d67926c413fe] Running
	I0311 21:40:34.137093   70417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766430" [f74a16b9-5e73-450f-bc62-c2e501a15ae2] Running
	I0311 21:40:34.137100   70417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766430" [abf4c5ea-4770-49a5-8480-dc9276663588] Running
	I0311 21:40:34.137105   70417 system_pods.go:61] "kube-proxy-t4fwc" [2b82ae7c-bffe-4fe4-b38c-3a789654df85] Running
	I0311 21:40:34.137111   70417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766430" [b1a26b37-7480-4f5c-bd99-785facd8b315] Running
	I0311 21:40:34.137121   70417 system_pods.go:61] "metrics-server-57f55c9bc5-9slpq" [ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:40:34.137133   70417 system_pods.go:61] "storage-provisioner" [d1d4992a-803a-4064-b372-6ba9729bd2ef] Running
	I0311 21:40:34.137147   70417 system_pods.go:74] duration metric: took 177.004603ms to wait for pod list to return data ...
	I0311 21:40:34.137201   70417 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:40:34.333563   70417 default_sa.go:45] found service account: "default"
	I0311 21:40:34.333589   70417 default_sa.go:55] duration metric: took 196.374123ms for default service account to be created ...
	I0311 21:40:34.333600   70417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:40:34.537376   70417 system_pods.go:86] 9 kube-system pods found
	I0311 21:40:34.537401   70417 system_pods.go:89] "coredns-5dd5756b68-kxjhf" [09678270-80f4-4bde-8080-3a3a41ecb356] Running
	I0311 21:40:34.537406   70417 system_pods.go:89] "coredns-5dd5756b68-qdcdw" [9f100559-2b0a-4068-a3e7-475b5865a1d9] Running
	I0311 21:40:34.537411   70417 system_pods.go:89] "etcd-default-k8s-diff-port-766430" [c09576c7-db47-4ce1-a8cb-d67926c413fe] Running
	I0311 21:40:34.537415   70417 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766430" [f74a16b9-5e73-450f-bc62-c2e501a15ae2] Running
	I0311 21:40:34.537420   70417 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766430" [abf4c5ea-4770-49a5-8480-dc9276663588] Running
	I0311 21:40:34.537423   70417 system_pods.go:89] "kube-proxy-t4fwc" [2b82ae7c-bffe-4fe4-b38c-3a789654df85] Running
	I0311 21:40:34.537427   70417 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766430" [b1a26b37-7480-4f5c-bd99-785facd8b315] Running
	I0311 21:40:34.537433   70417 system_pods.go:89] "metrics-server-57f55c9bc5-9slpq" [ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:40:34.537438   70417 system_pods.go:89] "storage-provisioner" [d1d4992a-803a-4064-b372-6ba9729bd2ef] Running
	I0311 21:40:34.537447   70417 system_pods.go:126] duration metric: took 203.840784ms to wait for k8s-apps to be running ...
	I0311 21:40:34.537453   70417 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:40:34.537493   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:34.555483   70417 system_svc.go:56] duration metric: took 18.021595ms WaitForService to wait for kubelet
	I0311 21:40:34.555511   70417 kubeadm.go:576] duration metric: took 3.76263503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:40:34.555534   70417 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:40:34.735214   70417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:40:34.735238   70417 node_conditions.go:123] node cpu capacity is 2
	I0311 21:40:34.735248   70417 node_conditions.go:105] duration metric: took 179.707447ms to run NodePressure ...
	I0311 21:40:34.735258   70417 start.go:240] waiting for startup goroutines ...
	I0311 21:40:34.735264   70417 start.go:245] waiting for cluster config update ...
	I0311 21:40:34.735274   70417 start.go:254] writing updated cluster config ...
	I0311 21:40:34.735539   70417 ssh_runner.go:195] Run: rm -f paused
	I0311 21:40:34.782710   70417 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:40:34.784627   70417 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-766430" cluster and "default" namespace by default
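	For reference, a minimal manual re-run of the API-server health checks the log performs just above. The address 192.168.61.11:8444 and the context name default-k8s-diff-port-766430 are taken from this run's log and will differ for other profiles; /healthz is readable anonymously on a default RBAC setup, so this is only a rough equivalent.
	  # query the same healthz endpoint minikube polls (expects a plain "ok")
	  curl -k https://192.168.61.11:8444/healthz
	  # or confirm the control-plane pods are Running via kubectl
	  kubectl --context default-k8s-diff-port-766430 get pods -n kube-system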
	I0311 21:40:56.380462   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:40:56.380539   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:40:56.382217   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:56.382264   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:56.382349   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:56.382450   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:56.382619   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:56.382712   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:56.384498   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:56.384579   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:56.384636   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:56.384766   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:56.384863   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:56.384967   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:56.385037   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:56.385139   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:56.385208   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:56.385281   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:56.385357   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:56.385408   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:56.385492   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:56.385567   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:56.385644   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:56.385769   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:56.385855   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:56.385962   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:56.386053   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:56.386104   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:56.386184   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:56.387594   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:56.387671   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:56.387738   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:56.387811   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:56.387914   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:56.388107   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:40:56.388182   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:40:56.388297   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388522   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388614   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388844   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388914   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389074   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389131   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389314   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389405   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389594   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389603   70908 kubeadm.go:309] 
	I0311 21:40:56.389653   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:40:56.389720   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:40:56.389732   70908 kubeadm.go:309] 
	I0311 21:40:56.389779   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:40:56.389811   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:40:56.389924   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:40:56.389933   70908 kubeadm.go:309] 
	I0311 21:40:56.390058   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:40:56.390109   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:40:56.390150   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:40:56.390159   70908 kubeadm.go:309] 
	I0311 21:40:56.390299   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:40:56.390395   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:40:56.390409   70908 kubeadm.go:309] 
	I0311 21:40:56.390512   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:40:56.390603   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:40:56.390702   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:40:56.390803   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:40:56.390833   70908 kubeadm.go:309] 
	W0311 21:40:56.390936   70908 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
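	The repeated [kubelet-check] failures above are probes of the kubelet's local healthz endpoint on port 10248. A rough manual reproduction on the node, using the same commands the kubeadm output itself suggests (assumes a systemd host; adjust if your node differs):
	  systemctl status kubelet                               # is the unit active at all?
	  curl -sSL http://localhost:10248/healthz && echo       # prints "ok" when the kubelet is healthy
	  journalctl -xeu kubelet | tail -n 50                   # most recent kubelet errors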
	
	I0311 21:40:56.390995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:40:56.941058   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:56.958276   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:40:56.970464   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:40:56.970493   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:40:56.970552   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:40:56.983314   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:40:56.983372   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:40:56.993791   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:40:57.004040   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:40:57.004098   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:40:57.014471   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.024751   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:40:57.024805   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.035389   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:40:57.045511   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:40:57.045556   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
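	The four grep/rm pairs above implement a simple stale-kubeconfig sweep before the retry. A condensed sketch of the same idea, assuming the standard /etc/kubernetes paths and control-plane address shown in the log; this is an illustration of the logic, not the tool's exact code:
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" 2>/dev/null \
	      || sudo rm -f "/etc/kubernetes/$f"   # drop configs that do not point at this control plane
	  done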
	I0311 21:40:57.056774   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:40:57.140620   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:57.140789   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:57.310076   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:57.310193   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:57.310280   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:57.506834   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:57.509261   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:57.509362   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:57.509446   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:57.509576   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:57.509669   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:57.509765   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:57.509839   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:57.509949   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:57.510004   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:57.510109   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:57.510231   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:57.510274   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:57.510361   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:57.585562   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:57.644460   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:57.784382   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:57.848952   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:57.867302   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:57.867791   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:57.867864   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:58.036523   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:58.039051   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:58.039176   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:58.054234   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:58.055548   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:58.057378   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:58.060167   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:41:38.062360   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:41:38.062886   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:38.063137   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:43.063592   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:43.063788   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:53.064505   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:53.064773   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:13.065744   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:13.065995   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.066718   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:53.067030   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.067070   70908 kubeadm.go:309] 
	I0311 21:42:53.067135   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:42:53.067191   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:42:53.067203   70908 kubeadm.go:309] 
	I0311 21:42:53.067259   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:42:53.067318   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:42:53.067456   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:42:53.067466   70908 kubeadm.go:309] 
	I0311 21:42:53.067590   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:42:53.067650   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:42:53.067724   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:42:53.067735   70908 kubeadm.go:309] 
	I0311 21:42:53.067889   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:42:53.068021   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:42:53.068036   70908 kubeadm.go:309] 
	I0311 21:42:53.068169   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:42:53.068297   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:42:53.068412   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:42:53.068512   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:42:53.068523   70908 kubeadm.go:309] 
	I0311 21:42:53.069455   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:42:53.069572   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:42:53.069682   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:42:53.069775   70908 kubeadm.go:393] duration metric: took 7m58.960224884s to StartCluster
	I0311 21:42:53.069833   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:42:53.069899   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:42:53.120459   70908 cri.go:89] found id: ""
	I0311 21:42:53.120486   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.120497   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:42:53.120505   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:42:53.120564   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:42:53.159639   70908 cri.go:89] found id: ""
	I0311 21:42:53.159667   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.159676   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:42:53.159682   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:42:53.159738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:42:53.199584   70908 cri.go:89] found id: ""
	I0311 21:42:53.199607   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.199614   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:42:53.199619   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:42:53.199676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:42:53.238868   70908 cri.go:89] found id: ""
	I0311 21:42:53.238901   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.238908   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:42:53.238917   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:42:53.238963   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:42:53.282172   70908 cri.go:89] found id: ""
	I0311 21:42:53.282205   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.282216   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:42:53.282225   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:42:53.282278   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:42:53.318450   70908 cri.go:89] found id: ""
	I0311 21:42:53.318481   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.318491   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:42:53.318499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:42:53.318559   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:42:53.360887   70908 cri.go:89] found id: ""
	I0311 21:42:53.360913   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.360923   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:42:53.360930   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:42:53.361027   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:42:53.414181   70908 cri.go:89] found id: ""
	I0311 21:42:53.414209   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.414220   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:42:53.414232   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:42:53.414247   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:42:53.478658   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:42:53.478689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:42:53.494577   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:42:53.494604   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:42:53.586460   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:42:53.586483   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:42:53.586500   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:42:53.697218   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:42:53.697251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0311 21:42:53.746291   70908 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0311 21:42:53.746336   70908 out.go:239] * 
	W0311 21:42:53.746388   70908 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.746409   70908 out.go:239] * 
	W0311 21:42:53.747362   70908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 21:42:53.750888   70908 out.go:177] 
	W0311 21:42:53.752146   70908 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.752211   70908 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0311 21:42:53.752239   70908 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0311 21:42:53.753832   70908 out.go:177] 
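	Following the suggestion printed above, a possible manual follow-up. The --extra-config flag is quoted verbatim from the log's own hint; <profile> is a placeholder for the affected minikube profile name and is not taken from this run. The crictl command is the inspection step the kubeadm output recommends for finding a crashed control-plane container.
	  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	  # on the node, list any control-plane containers the runtime did manage to start
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause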
	
	
	==> CRI-O <==
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.273497554Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193678273470488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee83efae-c8c4-4b4d-8086-be870bf34eb5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.273995129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=961f06cf-89e9-4848-99c4-65ad18cb7ac7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.274089237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=961f06cf-89e9-4848-99c4-65ad18cb7ac7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.274962013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710192900024670344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0601a54c86517ac45bde833e5034231ad39b0a781d319e3c7a96461a91a5407a,PodSandboxId:00f9c2c2c24a2d9a25455389cd7c53b91abe2677788341170c4e909e31c01592,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710192877991276879,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0775042-3ac4-4743-a85a-3df42267a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 82395f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371,PodSandboxId:17a6c558fdd05884e68588b4227687f72cdab56eaa9b47177121cc35d6f3e2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710192876858908409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-s6lsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f5daf9-7d52-475d-9341-09024dc7c8e7,},Annotations:map[string]string{io.kubernetes.container.hash: 26f79f4f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db,PodSandboxId:6c311e64040daf112fa8999c99f3eaf422700c1b3814a57dd5cefb9dc1dc65de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710192869284267856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmz4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81ec7a47-6b52-4133-bd
c5-4dea57847900,},Annotations:map[string]string{io.kubernetes.container.hash: ff981d25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710192869223965401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a
6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a,PodSandboxId:ab96f9a415c1d01675fe726ae2e6c8a87e3c75918be79e00f89da171121192e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710192864589640678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c01883a8f967cb75fc9de8ead6fb204,},Annotations:map[string]string{io.kuber
netes.container.hash: d7d87a8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0,PodSandboxId:fc676152297873cfd00ddd04200a063d29b282a0422dc556611400639a99b119,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710192864592952670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdcc8e32375fbc3cf5ca65346b1457dd,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c,PodSandboxId:9660842d3b13ad4a8355982e8c4d811b1b5506a638f011bd6a00609a29dd3377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710192864521508756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07206bcb9cdf44cefceebaa6e0ed3a3,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902,PodSandboxId:36c029e61ceaa7ebfe4083e2f05f06c74b54b4f9481478d5a9ba0e5296e60270,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710192864494375201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816bd9883830036b8fe6a241a004950c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 401348b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=961f06cf-89e9-4848-99c4-65ad18cb7ac7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.323243977Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e306c36-d720-414b-a4ce-56e00cbcf918 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.323346389Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e306c36-d720-414b-a4ce-56e00cbcf918 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.325559886Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=783f98e7-16af-471c-83e7-4985e941499b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.325996574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193678325972812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=783f98e7-16af-471c-83e7-4985e941499b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.327022764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59382be4-0cd3-45fa-b17e-5e324ee66b53 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.327105271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59382be4-0cd3-45fa-b17e-5e324ee66b53 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.327342144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710192900024670344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0601a54c86517ac45bde833e5034231ad39b0a781d319e3c7a96461a91a5407a,PodSandboxId:00f9c2c2c24a2d9a25455389cd7c53b91abe2677788341170c4e909e31c01592,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710192877991276879,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0775042-3ac4-4743-a85a-3df42267a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 82395f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371,PodSandboxId:17a6c558fdd05884e68588b4227687f72cdab56eaa9b47177121cc35d6f3e2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710192876858908409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-s6lsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f5daf9-7d52-475d-9341-09024dc7c8e7,},Annotations:map[string]string{io.kubernetes.container.hash: 26f79f4f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db,PodSandboxId:6c311e64040daf112fa8999c99f3eaf422700c1b3814a57dd5cefb9dc1dc65de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710192869284267856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmz4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81ec7a47-6b52-4133-bd
c5-4dea57847900,},Annotations:map[string]string{io.kubernetes.container.hash: ff981d25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710192869223965401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a
6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a,PodSandboxId:ab96f9a415c1d01675fe726ae2e6c8a87e3c75918be79e00f89da171121192e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710192864589640678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c01883a8f967cb75fc9de8ead6fb204,},Annotations:map[string]string{io.kuber
netes.container.hash: d7d87a8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0,PodSandboxId:fc676152297873cfd00ddd04200a063d29b282a0422dc556611400639a99b119,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710192864592952670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdcc8e32375fbc3cf5ca65346b1457dd,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c,PodSandboxId:9660842d3b13ad4a8355982e8c4d811b1b5506a638f011bd6a00609a29dd3377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710192864521508756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07206bcb9cdf44cefceebaa6e0ed3a3,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902,PodSandboxId:36c029e61ceaa7ebfe4083e2f05f06c74b54b4f9481478d5a9ba0e5296e60270,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710192864494375201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816bd9883830036b8fe6a241a004950c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 401348b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59382be4-0cd3-45fa-b17e-5e324ee66b53 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.371348369Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f4daa08-0ee6-42b5-958d-b7b48ec9e0b9 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.371465403Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f4daa08-0ee6-42b5-958d-b7b48ec9e0b9 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.372973349Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fbe5adfb-adab-4b34-9e19-693faab5ed00 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.373320109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193678373300296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fbe5adfb-adab-4b34-9e19-693faab5ed00 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.373797016Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b71c8f95-bab4-4299-9d10-0df2b22ef20c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.373884572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b71c8f95-bab4-4299-9d10-0df2b22ef20c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.374157060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710192900024670344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0601a54c86517ac45bde833e5034231ad39b0a781d319e3c7a96461a91a5407a,PodSandboxId:00f9c2c2c24a2d9a25455389cd7c53b91abe2677788341170c4e909e31c01592,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710192877991276879,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0775042-3ac4-4743-a85a-3df42267a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 82395f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371,PodSandboxId:17a6c558fdd05884e68588b4227687f72cdab56eaa9b47177121cc35d6f3e2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710192876858908409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-s6lsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f5daf9-7d52-475d-9341-09024dc7c8e7,},Annotations:map[string]string{io.kubernetes.container.hash: 26f79f4f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db,PodSandboxId:6c311e64040daf112fa8999c99f3eaf422700c1b3814a57dd5cefb9dc1dc65de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710192869284267856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmz4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81ec7a47-6b52-4133-bd
c5-4dea57847900,},Annotations:map[string]string{io.kubernetes.container.hash: ff981d25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710192869223965401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a
6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a,PodSandboxId:ab96f9a415c1d01675fe726ae2e6c8a87e3c75918be79e00f89da171121192e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710192864589640678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c01883a8f967cb75fc9de8ead6fb204,},Annotations:map[string]string{io.kuber
netes.container.hash: d7d87a8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0,PodSandboxId:fc676152297873cfd00ddd04200a063d29b282a0422dc556611400639a99b119,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710192864592952670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdcc8e32375fbc3cf5ca65346b1457dd,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c,PodSandboxId:9660842d3b13ad4a8355982e8c4d811b1b5506a638f011bd6a00609a29dd3377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710192864521508756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07206bcb9cdf44cefceebaa6e0ed3a3,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902,PodSandboxId:36c029e61ceaa7ebfe4083e2f05f06c74b54b4f9481478d5a9ba0e5296e60270,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710192864494375201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816bd9883830036b8fe6a241a004950c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 401348b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b71c8f95-bab4-4299-9d10-0df2b22ef20c name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.419831343Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1a5dc51-9d95-443f-8f9f-b92f3002cb61 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.419907257Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1a5dc51-9d95-443f-8f9f-b92f3002cb61 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.421172742Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=623ebcc4-4faf-4761-902b-b32d05e0c8c7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.422853454Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193678422829185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=623ebcc4-4faf-4761-902b-b32d05e0c8c7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.423610275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8582c073-5e49-456b-bf90-ed366fa92acd name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.423665745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8582c073-5e49-456b-bf90-ed366fa92acd name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:47:58 no-preload-324578 crio[688]: time="2024-03-11 21:47:58.423930643Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710192900024670344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0601a54c86517ac45bde833e5034231ad39b0a781d319e3c7a96461a91a5407a,PodSandboxId:00f9c2c2c24a2d9a25455389cd7c53b91abe2677788341170c4e909e31c01592,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710192877991276879,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0775042-3ac4-4743-a85a-3df42267a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 82395f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371,PodSandboxId:17a6c558fdd05884e68588b4227687f72cdab56eaa9b47177121cc35d6f3e2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710192876858908409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-s6lsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f5daf9-7d52-475d-9341-09024dc7c8e7,},Annotations:map[string]string{io.kubernetes.container.hash: 26f79f4f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db,PodSandboxId:6c311e64040daf112fa8999c99f3eaf422700c1b3814a57dd5cefb9dc1dc65de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710192869284267856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmz4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81ec7a47-6b52-4133-bd
c5-4dea57847900,},Annotations:map[string]string{io.kubernetes.container.hash: ff981d25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710192869223965401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a
6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a,PodSandboxId:ab96f9a415c1d01675fe726ae2e6c8a87e3c75918be79e00f89da171121192e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710192864589640678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c01883a8f967cb75fc9de8ead6fb204,},Annotations:map[string]string{io.kuber
netes.container.hash: d7d87a8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0,PodSandboxId:fc676152297873cfd00ddd04200a063d29b282a0422dc556611400639a99b119,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710192864592952670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdcc8e32375fbc3cf5ca65346b1457dd,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c,PodSandboxId:9660842d3b13ad4a8355982e8c4d811b1b5506a638f011bd6a00609a29dd3377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710192864521508756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07206bcb9cdf44cefceebaa6e0ed3a3,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902,PodSandboxId:36c029e61ceaa7ebfe4083e2f05f06c74b54b4f9481478d5a9ba0e5296e60270,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710192864494375201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816bd9883830036b8fe6a241a004950c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 401348b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8582c073-5e49-456b-bf90-ed366fa92acd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	21d8b522dbe03       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   98e0753deae41       storage-provisioner
	0601a54c86517       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   00f9c2c2c24a2       busybox
	47a3cc73ba85a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   17a6c558fdd05       coredns-76f75df574-s6lsb
	c4b1f09c4c07d       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      13 minutes ago      Running             kube-proxy                1                   6c311e64040da       kube-proxy-rmz4b
	8c5aec8c42b97       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   98e0753deae41       storage-provisioner
	afcbb2dc1ded0       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      13 minutes ago      Running             kube-scheduler            1                   fc67615229787       kube-scheduler-no-preload-324578
	c0cb4bf3e770c       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      13 minutes ago      Running             etcd                      1                   ab96f9a415c1d       etcd-no-preload-324578
	349dc13986ab3       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      13 minutes ago      Running             kube-controller-manager   1                   9660842d3b13a       kube-controller-manager-no-preload-324578
	1ed4ff4bec8a1       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      13 minutes ago      Running             kube-apiserver            1                   36c029e61ceaa       kube-apiserver-no-preload-324578
	
	
	==> coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36589 - 27227 "HINFO IN 7298603871246463141.566043023039465393. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006403542s
	
	
	==> describe nodes <==
	Name:               no-preload-324578
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-324578
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=no-preload-324578
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T21_25_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 21:25:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-324578
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:47:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 21:45:12 +0000   Mon, 11 Mar 2024 21:25:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 21:45:12 +0000   Mon, 11 Mar 2024 21:25:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 21:45:12 +0000   Mon, 11 Mar 2024 21:25:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 21:45:12 +0000   Mon, 11 Mar 2024 21:34:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    no-preload-324578
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb451091906a45f09624844ec4bffca5
	  System UUID:                eb451091-906a-45f0-9624-844ec4bffca5
	  Boot ID:                    4581dfec-8b49-4d5c-ae2b-764bbaa7967c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-76f75df574-s6lsb                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-324578                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-324578             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-324578    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-rmz4b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-324578             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-nv4gd              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-324578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-324578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-324578 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-324578 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-324578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-324578 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                22m                kubelet          Node no-preload-324578 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-324578 event: Registered Node no-preload-324578 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-324578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-324578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-324578 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-324578 event: Registered Node no-preload-324578 in Controller
	
	
	==> dmesg <==
	[Mar11 21:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053580] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045063] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.537957] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.354207] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.698590] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar11 21:34] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.056190] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069135] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.216244] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.115661] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.252298] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[ +17.048095] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.058759] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.751935] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +5.632867] kauditd_printk_skb: 100 callbacks suppressed
	[  +4.553775] systemd-fstab-generator[1925]: Ignoring "noauto" option for root device
	[  +2.955658] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.901502] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] <==
	{"level":"warn","ts":"2024-03-11T21:35:17.474893Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"357.550141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c068d\" ","response":"range_response_count:1 size:940"}
	{"level":"info","ts":"2024-03-11T21:35:17.474967Z","caller":"traceutil/trace.go:171","msg":"trace[950422552] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c068d; range_end:; response_count:1; response_revision:654; }","duration":"357.631626ms","start":"2024-03-11T21:35:17.117302Z","end":"2024-03-11T21:35:17.474957Z","steps":["trace[950422552] 'agreement among raft nodes before linearized reading'  (duration: 357.518014ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T21:35:17.475002Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T21:35:17.117208Z","time spent":"357.786266ms","remote":"127.0.0.1:51136","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":962,"request content":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c068d\" "}
	{"level":"warn","ts":"2024-03-11T21:35:17.475133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"468.331158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-nv4gd\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-03-11T21:35:17.475199Z","caller":"traceutil/trace.go:171","msg":"trace[1116546657] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-nv4gd; range_end:; response_count:1; response_revision:654; }","duration":"468.394037ms","start":"2024-03-11T21:35:17.006795Z","end":"2024-03-11T21:35:17.475189Z","steps":["trace[1116546657] 'agreement among raft nodes before linearized reading'  (duration: 468.311485ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T21:35:17.475238Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T21:35:17.006781Z","time spent":"468.44842ms","remote":"127.0.0.1:51246","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4258,"request content":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-nv4gd\" "}
	{"level":"warn","ts":"2024-03-11T21:35:17.734615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.585935ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2618718042736031601 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c068d\" mod_revision:633 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c068d\" value_size:830 lease:2618718042736031120 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c068d\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-11T21:35:17.734831Z","caller":"traceutil/trace.go:171","msg":"trace[430309502] linearizableReadLoop","detail":"{readStateIndex:708; appliedIndex:707; }","duration":"250.615585ms","start":"2024-03-11T21:35:17.4842Z","end":"2024-03-11T21:35:17.734816Z","steps":["trace[430309502] 'read index received'  (duration: 120.717511ms)","trace[430309502] 'applied index is now lower than readState.Index'  (duration: 129.896463ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-11T21:35:17.734908Z","caller":"traceutil/trace.go:171","msg":"trace[1428849145] transaction","detail":"{read_only:false; response_revision:655; number_of_response:1; }","duration":"253.823327ms","start":"2024-03-11T21:35:17.481075Z","end":"2024-03-11T21:35:17.734899Z","steps":["trace[1428849145] 'process raft request'  (duration: 123.884872ms)","trace[1428849145] 'compare'  (duration: 129.427399ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T21:35:17.735139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.944768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-324578\" ","response":"range_response_count:1 size:4692"}
	{"level":"info","ts":"2024-03-11T21:35:17.735195Z","caller":"traceutil/trace.go:171","msg":"trace[1597271578] range","detail":"{range_begin:/registry/minions/no-preload-324578; range_end:; response_count:1; response_revision:655; }","duration":"251.005454ms","start":"2024-03-11T21:35:17.484182Z","end":"2024-03-11T21:35:17.735187Z","steps":["trace[1597271578] 'agreement among raft nodes before linearized reading'  (duration: 250.884018ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T21:35:17.735334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.42753ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-03-11T21:35:17.735383Z","caller":"traceutil/trace.go:171","msg":"trace[2015197557] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:655; }","duration":"177.477099ms","start":"2024-03-11T21:35:17.557899Z","end":"2024-03-11T21:35:17.735376Z","steps":["trace[2015197557] 'agreement among raft nodes before linearized reading'  (duration: 177.405929ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T21:35:18.046145Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.618387ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2618718042736031606 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c97fe\" mod_revision:634 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c97fe\" value_size:668 lease:2618718042736031120 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c97fe\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-11T21:35:18.046452Z","caller":"traceutil/trace.go:171","msg":"trace[797991262] transaction","detail":"{read_only:false; response_revision:657; number_of_response:1; }","duration":"301.981313ms","start":"2024-03-11T21:35:17.74446Z","end":"2024-03-11T21:35:18.046441Z","steps":["trace[797991262] 'process raft request'  (duration: 301.912171ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T21:35:18.046558Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T21:35:17.744448Z","time spent":"302.073682ms","remote":"127.0.0.1:51226","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:484 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-03-11T21:35:18.046565Z","caller":"traceutil/trace.go:171","msg":"trace[2100089755] linearizableReadLoop","detail":"{readStateIndex:709; appliedIndex:708; }","duration":"306.087084ms","start":"2024-03-11T21:35:17.740465Z","end":"2024-03-11T21:35:18.046552Z","steps":["trace[2100089755] 'read index received'  (duration: 119.933961ms)","trace[2100089755] 'applied index is now lower than readState.Index'  (duration: 186.151718ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T21:35:18.046773Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"306.31703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-nv4gd\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-03-11T21:35:18.046824Z","caller":"traceutil/trace.go:171","msg":"trace[377667928] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-nv4gd; range_end:; response_count:1; response_revision:657; }","duration":"306.37314ms","start":"2024-03-11T21:35:17.740443Z","end":"2024-03-11T21:35:18.046817Z","steps":["trace[377667928] 'agreement among raft nodes before linearized reading'  (duration: 306.203485ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T21:35:18.046846Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T21:35:17.740433Z","time spent":"306.40709ms","remote":"127.0.0.1:51246","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4258,"request content":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-nv4gd\" "}
	{"level":"info","ts":"2024-03-11T21:35:18.046989Z","caller":"traceutil/trace.go:171","msg":"trace[1197202910] transaction","detail":"{read_only:false; response_revision:656; number_of_response:1; }","duration":"306.653787ms","start":"2024-03-11T21:35:17.740325Z","end":"2024-03-11T21:35:18.046979Z","steps":["trace[1197202910] 'process raft request'  (duration: 120.067487ms)","trace[1197202910] 'compare'  (duration: 185.413109ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T21:35:18.047067Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T21:35:17.740308Z","time spent":"306.724341ms","remote":"127.0.0.1:51136","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":763,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c97fe\" mod_revision:634 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c97fe\" value_size:668 lease:2618718042736031120 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c97fe\" > >"}
	{"level":"info","ts":"2024-03-11T21:44:26.100335Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":874}
	{"level":"info","ts":"2024-03-11T21:44:26.103328Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":874,"took":"2.544833ms","hash":1178115223}
	{"level":"info","ts":"2024-03-11T21:44:26.103387Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1178115223,"revision":874,"compact-revision":-1}
	
	
	==> kernel <==
	 21:47:58 up 14 min,  0 users,  load average: 0.28, 0.20, 0.12
	Linux no-preload-324578 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] <==
	I0311 21:42:29.264775       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:44:28.264313       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:44:28.264662       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0311 21:44:29.265630       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:44:29.265747       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:44:29.265759       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:44:29.265806       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:44:29.265875       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:44:29.267055       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:45:29.266055       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:45:29.266262       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:45:29.266291       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:45:29.267386       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:45:29.267520       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:45:29.267559       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:47:29.267531       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:47:29.267955       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	W0311 21:47:29.268001       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:47:29.268127       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:47:29.268008       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 21:47:29.269817       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] <==
	I0311 21:42:13.225069       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:42:42.739149       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:42:43.234227       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:43:12.744391       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:43:13.242240       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:43:42.749484       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:43:43.251229       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:44:12.754989       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:44:13.259581       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:44:42.759538       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:44:43.269139       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:45:12.765519       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:45:13.277121       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0311 21:45:35.814131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="233.247µs"
	E0311 21:45:42.771754       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:45:43.285631       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0311 21:45:49.815247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="122.036µs"
	E0311 21:46:12.777754       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:46:13.294210       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:46:42.782670       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:46:43.302060       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:47:12.788682       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:47:13.310225       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:47:42.795252       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:47:43.319170       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] <==
	I0311 21:34:29.638221       1 server_others.go:72] "Using iptables proxy"
	I0311 21:34:29.650503       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.36"]
	I0311 21:34:29.704068       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0311 21:34:29.704129       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 21:34:29.704155       1 server_others.go:168] "Using iptables Proxier"
	I0311 21:34:29.707921       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 21:34:29.708391       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0311 21:34:29.708440       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:34:29.709589       1 config.go:188] "Starting service config controller"
	I0311 21:34:29.709659       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 21:34:29.709683       1 config.go:97] "Starting endpoint slice config controller"
	I0311 21:34:29.709847       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 21:34:29.710031       1 config.go:315] "Starting node config controller"
	I0311 21:34:29.710061       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 21:34:29.809855       1 shared_informer.go:318] Caches are synced for service config
	I0311 21:34:29.811050       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 21:34:29.811241       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] <==
	I0311 21:34:26.127922       1 serving.go:380] Generated self-signed cert in-memory
	W0311 21:34:28.232805       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0311 21:34:28.232921       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 21:34:28.232934       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0311 21:34:28.232941       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0311 21:34:28.298546       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0311 21:34:28.298649       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:34:28.300761       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0311 21:34:28.301037       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 21:34:28.301278       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0311 21:34:28.301467       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0311 21:34:28.402098       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 21:45:23 no-preload-324578 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:45:23 no-preload-324578 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:45:23 no-preload-324578 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:45:23 no-preload-324578 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:45:35 no-preload-324578 kubelet[1315]: E0311 21:45:35.796625    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:45:49 no-preload-324578 kubelet[1315]: E0311 21:45:49.795446    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:46:02 no-preload-324578 kubelet[1315]: E0311 21:46:02.794516    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:46:14 no-preload-324578 kubelet[1315]: E0311 21:46:14.794891    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:46:23 no-preload-324578 kubelet[1315]: E0311 21:46:23.811090    1315 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:46:23 no-preload-324578 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:46:23 no-preload-324578 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:46:23 no-preload-324578 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:46:23 no-preload-324578 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:46:27 no-preload-324578 kubelet[1315]: E0311 21:46:27.799256    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:46:41 no-preload-324578 kubelet[1315]: E0311 21:46:41.795127    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:46:56 no-preload-324578 kubelet[1315]: E0311 21:46:56.795070    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:47:08 no-preload-324578 kubelet[1315]: E0311 21:47:08.795385    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:47:23 no-preload-324578 kubelet[1315]: E0311 21:47:23.796029    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:47:23 no-preload-324578 kubelet[1315]: E0311 21:47:23.810220    1315 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:47:23 no-preload-324578 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:47:23 no-preload-324578 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:47:23 no-preload-324578 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:47:23 no-preload-324578 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:47:38 no-preload-324578 kubelet[1315]: E0311 21:47:38.795164    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:47:51 no-preload-324578 kubelet[1315]: E0311 21:47:51.795669    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	
	
	==> storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] <==
	I0311 21:35:00.142049       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 21:35:00.162424       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 21:35:00.162601       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 21:35:18.053813       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 21:35:18.054343       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88144734-da96-462d-b463-5b878079ac26", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-324578_4f8c71c4-91e4-4eb5-b31f-b50cae83aac9 became leader
	I0311 21:35:18.055390       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-324578_4f8c71c4-91e4-4eb5-b31f-b50cae83aac9!
	I0311 21:35:18.157284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-324578_4f8c71c4-91e4-4eb5-b31f-b50cae83aac9!
	
	
	==> storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] <==
	I0311 21:34:29.528325       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0311 21:34:59.531684       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-324578 -n no-preload-324578
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-324578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nv4gd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-324578 describe pod metrics-server-57f55c9bc5-nv4gd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-324578 describe pod metrics-server-57f55c9bc5-nv4gd: exit status 1 (65.618093ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nv4gd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-324578 describe pod metrics-server-57f55c9bc5-nv4gd: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-743937 -n embed-certs-743937
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-11 21:48:53.272372973 +0000 UTC m=+5941.444047275
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-743937 -n embed-certs-743937
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-743937 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-743937 logs -n 25: (2.085623514s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-427678 sudo cat                              | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo find                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo crio                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-427678                                       | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-124446 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | disable-driver-mounts-124446                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:26 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-766430  | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-324578             | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-743937            | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239315        | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-766430       | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-324578                  | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:40 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-743937                 | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239315             | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC | 11 Mar 24 21:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 21:30:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 21:30:01.044166   70908 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:30:01.044254   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044259   70908 out.go:304] Setting ErrFile to fd 2...
	I0311 21:30:01.044263   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044451   70908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:30:01.044970   70908 out.go:298] Setting JSON to false
	I0311 21:30:01.045838   70908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7950,"bootTime":1710184651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:30:01.045894   70908 start.go:139] virtualization: kvm guest
	I0311 21:30:01.048311   70908 out.go:177] * [old-k8s-version-239315] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:30:01.050003   70908 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:30:01.050011   70908 notify.go:220] Checking for updates...
	I0311 21:30:01.051498   70908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:30:01.052999   70908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:30:01.054439   70908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:30:01.055768   70908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:30:01.057137   70908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:30:01.058760   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:30:01.059167   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.059205   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.073734   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0311 21:30:01.074087   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.074586   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.074618   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.074966   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.075173   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.077005   70908 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 21:30:01.078583   70908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:30:01.078879   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.078914   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.093226   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0311 21:30:01.093614   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.094174   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.094243   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.094616   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.094805   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.128302   70908 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 21:30:01.129965   70908 start.go:297] selected driver: kvm2
	I0311 21:30:01.129991   70908 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.130113   70908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:30:01.131050   70908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.131115   70908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:30:01.145452   70908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:30:01.145782   70908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:30:01.145811   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:30:01.145819   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:30:01.145863   70908 start.go:340] cluster config:
	{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.145954   70908 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.147725   70908 out.go:177] * Starting "old-k8s-version-239315" primary control-plane node in "old-k8s-version-239315" cluster
	I0311 21:30:01.148916   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:30:01.148943   70908 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0311 21:30:01.148955   70908 cache.go:56] Caching tarball of preloaded images
	I0311 21:30:01.149022   70908 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:30:01.149032   70908 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0311 21:30:01.149114   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:30:01.149263   70908 start.go:360] acquireMachinesLock for old-k8s-version-239315: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:30:05.352968   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:08.425086   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:14.504922   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:17.577080   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:23.656996   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:26.729009   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:32.809042   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:35.881008   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:41.960992   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:45.033096   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:51.112925   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:54.184989   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:00.265058   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:03.337012   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:09.416960   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:12.489005   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:18.569021   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:21.640990   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:27.721019   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:30.793040   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:36.872985   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:39.945005   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:46.025035   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:49.096988   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:55.176985   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:58.249009   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:04.328981   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:07.401006   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:13.480986   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:16.552965   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:22.632997   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:25.705064   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:31.784993   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:34.857027   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:40.937002   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:44.008989   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:50.088959   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:53.161092   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:59.241045   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:02.313084   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:08.393056   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:11.465079   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:17.545057   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:20.617082   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:26.697000   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:29.768926   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:35.849024   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:38.921096   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:41.925305   70458 start.go:364] duration metric: took 4m36.419231792s to acquireMachinesLock for "no-preload-324578"
	I0311 21:33:41.925360   70458 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:33:41.925368   70458 fix.go:54] fixHost starting: 
	I0311 21:33:41.925768   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:33:41.925798   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:33:41.940654   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0311 21:33:41.941130   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:33:41.941619   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:33:41.941646   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:33:41.942045   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:33:41.942209   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:33:41.942370   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:33:41.944009   70458 fix.go:112] recreateIfNeeded on no-preload-324578: state=Stopped err=<nil>
	I0311 21:33:41.944030   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	W0311 21:33:41.944231   70458 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:33:41.946020   70458 out.go:177] * Restarting existing kvm2 VM for "no-preload-324578" ...
	I0311 21:33:41.922711   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:33:41.922754   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:33:41.923131   70417 buildroot.go:166] provisioning hostname "default-k8s-diff-port-766430"
	I0311 21:33:41.923158   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:33:41.923430   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:33:41.925178   70417 machine.go:97] duration metric: took 4m37.414792129s to provisionDockerMachine
	I0311 21:33:41.925213   70417 fix.go:56] duration metric: took 4m37.435982654s for fixHost
	I0311 21:33:41.925219   70417 start.go:83] releasing machines lock for "default-k8s-diff-port-766430", held for 4m37.436000925s
	W0311 21:33:41.925242   70417 start.go:713] error starting host: provision: host is not running
	W0311 21:33:41.925330   70417 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0311 21:33:41.925343   70417 start.go:728] Will try again in 5 seconds ...
	I0311 21:33:41.947495   70458 main.go:141] libmachine: (no-preload-324578) Calling .Start
	I0311 21:33:41.947676   70458 main.go:141] libmachine: (no-preload-324578) Ensuring networks are active...
	I0311 21:33:41.948386   70458 main.go:141] libmachine: (no-preload-324578) Ensuring network default is active
	I0311 21:33:41.948724   70458 main.go:141] libmachine: (no-preload-324578) Ensuring network mk-no-preload-324578 is active
	I0311 21:33:41.949117   70458 main.go:141] libmachine: (no-preload-324578) Getting domain xml...
	I0311 21:33:41.949876   70458 main.go:141] libmachine: (no-preload-324578) Creating domain...
	I0311 21:33:43.129733   70458 main.go:141] libmachine: (no-preload-324578) Waiting to get IP...
	I0311 21:33:43.130601   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.131006   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.131053   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.130975   71444 retry.go:31] will retry after 209.203314ms: waiting for machine to come up
	I0311 21:33:43.341724   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.342324   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.342361   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.342279   71444 retry.go:31] will retry after 375.396917ms: waiting for machine to come up
	I0311 21:33:43.718906   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.719329   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.719351   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.719288   71444 retry.go:31] will retry after 428.365393ms: waiting for machine to come up
	I0311 21:33:44.148895   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:44.149334   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:44.149358   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:44.149284   71444 retry.go:31] will retry after 561.478535ms: waiting for machine to come up
	I0311 21:33:44.712065   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:44.712548   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:44.712576   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:44.712465   71444 retry.go:31] will retry after 700.993236ms: waiting for machine to come up
	I0311 21:33:46.926379   70417 start.go:360] acquireMachinesLock for default-k8s-diff-port-766430: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:33:45.415695   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:45.416242   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:45.416276   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:45.416215   71444 retry.go:31] will retry after 809.474202ms: waiting for machine to come up
	I0311 21:33:46.227098   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:46.227573   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:46.227608   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:46.227520   71444 retry.go:31] will retry after 1.075187328s: waiting for machine to come up
	I0311 21:33:47.303981   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:47.304454   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:47.304483   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:47.304397   71444 retry.go:31] will retry after 1.145290319s: waiting for machine to come up
	I0311 21:33:48.451871   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:48.452316   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:48.452350   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:48.452267   71444 retry.go:31] will retry after 1.172261063s: waiting for machine to come up
	I0311 21:33:49.626502   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:49.627067   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:49.627089   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:49.627023   71444 retry.go:31] will retry after 2.201479026s: waiting for machine to come up
	I0311 21:33:51.831519   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:51.831972   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:51.832008   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:51.831905   71444 retry.go:31] will retry after 2.888101699s: waiting for machine to come up
	I0311 21:33:54.721322   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:54.721753   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:54.721773   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:54.721722   71444 retry.go:31] will retry after 3.512655296s: waiting for machine to come up
	I0311 21:33:58.235767   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:58.236180   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:58.236219   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:58.236141   71444 retry.go:31] will retry after 3.975760652s: waiting for machine to come up
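
	[editor's note] The retry.go lines above show libmachine polling for the VM's DHCP-assigned IP with a delay that grows between attempts. Below is a minimal, illustrative Go sketch of that wait-and-retry pattern; the function name, timings, and the fake lookup are assumptions made for the example, not minikube's actual implementation.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it returns an address, sleeping a little
	// longer between attempts, similar in spirit to the retry.go lines above.
	// The names and timings are illustrative only.
	func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
		deadline := time.Now().Add(maxWait)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			ip, err := lookup()
			if err == nil {
				return ip, nil
			}
			// add a little jitter and grow the delay, capped at a few seconds
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 4*time.Second {
				delay = delay * 3 / 2
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.36", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}

	With the fake lookup above, the helper prints a few retry messages and then returns 192.168.39.36, mirroring the shape of the log lines it imitates.
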
	I0311 21:34:03.525918   70604 start.go:364] duration metric: took 4m44.449252209s to acquireMachinesLock for "embed-certs-743937"
	I0311 21:34:03.525995   70604 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:03.526008   70604 fix.go:54] fixHost starting: 
	I0311 21:34:03.526428   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:03.526470   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:03.542427   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I0311 21:34:03.542857   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:03.543292   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:34:03.543317   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:03.543616   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:03.543806   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:03.543991   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:34:03.545366   70604 fix.go:112] recreateIfNeeded on embed-certs-743937: state=Stopped err=<nil>
	I0311 21:34:03.545391   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	W0311 21:34:03.545540   70604 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:03.548158   70604 out.go:177] * Restarting existing kvm2 VM for "embed-certs-743937" ...
	I0311 21:34:03.549803   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Start
	I0311 21:34:03.549966   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring networks are active...
	I0311 21:34:03.550712   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring network default is active
	I0311 21:34:03.551124   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring network mk-embed-certs-743937 is active
	I0311 21:34:03.551528   70604 main.go:141] libmachine: (embed-certs-743937) Getting domain xml...
	I0311 21:34:03.552226   70604 main.go:141] libmachine: (embed-certs-743937) Creating domain...
	I0311 21:34:02.213709   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.214152   70458 main.go:141] libmachine: (no-preload-324578) Found IP for machine: 192.168.39.36
	I0311 21:34:02.214181   70458 main.go:141] libmachine: (no-preload-324578) Reserving static IP address...
	I0311 21:34:02.214196   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has current primary IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.214631   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "no-preload-324578", mac: "52:54:00:00:fc:98", ip: "192.168.39.36"} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.214655   70458 main.go:141] libmachine: (no-preload-324578) DBG | skip adding static IP to network mk-no-preload-324578 - found existing host DHCP lease matching {name: "no-preload-324578", mac: "52:54:00:00:fc:98", ip: "192.168.39.36"}
	I0311 21:34:02.214666   70458 main.go:141] libmachine: (no-preload-324578) Reserved static IP address: 192.168.39.36
	I0311 21:34:02.214680   70458 main.go:141] libmachine: (no-preload-324578) Waiting for SSH to be available...
	I0311 21:34:02.214704   70458 main.go:141] libmachine: (no-preload-324578) DBG | Getting to WaitForSSH function...
	I0311 21:34:02.216798   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.217068   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.217111   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.217285   70458 main.go:141] libmachine: (no-preload-324578) DBG | Using SSH client type: external
	I0311 21:34:02.217316   70458 main.go:141] libmachine: (no-preload-324578) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa (-rw-------)
	I0311 21:34:02.217356   70458 main.go:141] libmachine: (no-preload-324578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:02.217374   70458 main.go:141] libmachine: (no-preload-324578) DBG | About to run SSH command:
	I0311 21:34:02.217389   70458 main.go:141] libmachine: (no-preload-324578) DBG | exit 0
	I0311 21:34:02.340837   70458 main.go:141] libmachine: (no-preload-324578) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:02.341154   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetConfigRaw
	I0311 21:34:02.341752   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:02.344368   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.344756   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.344791   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
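
	[editor's note] The DBG lines above come from libmachine matching the domain's MAC address against libvirt's DHCP lease table for the mk-no-preload-324578 network. The same lease table can also be inspected by hand; this short Go sketch simply shells out to virsh and is a hypothetical helper, not part of the log or of minikube.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// list the DHCP leases libvirt holds for the network named in the log
		out, err := exec.Command("virsh", "net-dhcp-leases", "mk-no-preload-324578").CombinedOutput()
		if err != nil {
			fmt.Println("virsh failed:", err)
		}
		fmt.Print(string(out))
	}
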
	I0311 21:34:02.344942   70458 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/config.json ...
	I0311 21:34:02.345142   70458 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:02.345159   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:02.345353   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.347647   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.348001   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.348029   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.348118   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.348284   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.348432   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.348548   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.348704   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.348913   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.348925   70458 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:02.457273   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:02.457298   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.457523   70458 buildroot.go:166] provisioning hostname "no-preload-324578"
	I0311 21:34:02.457554   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.457757   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.460347   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.460658   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.460688   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.460913   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.461126   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.461286   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.461415   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.461574   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.461758   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.461775   70458 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-324578 && echo "no-preload-324578" | sudo tee /etc/hostname
	I0311 21:34:02.583388   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-324578
	
	I0311 21:34:02.583414   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.586043   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.586399   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.586431   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.586592   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.586799   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.586957   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.587084   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.587271   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.587433   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.587449   70458 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-324578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-324578/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-324578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:02.702365   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:02.702399   70458 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:02.702420   70458 buildroot.go:174] setting up certificates
	I0311 21:34:02.702431   70458 provision.go:84] configureAuth start
	I0311 21:34:02.702439   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.702725   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:02.705459   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.705882   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.705902   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.706048   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.708166   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.708476   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.708502   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.708618   70458 provision.go:143] copyHostCerts
	I0311 21:34:02.708675   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:02.708684   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:02.708764   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:02.708875   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:02.708885   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:02.708911   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:02.708977   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:02.708984   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:02.709005   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:02.709063   70458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.no-preload-324578 san=[127.0.0.1 192.168.39.36 localhost minikube no-preload-324578]
	I0311 21:34:02.823423   70458 provision.go:177] copyRemoteCerts
	I0311 21:34:02.823484   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:02.823508   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.826221   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.826538   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.826584   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.826743   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.826974   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.827158   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.827344   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:02.912138   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:02.938138   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0311 21:34:02.967391   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:34:02.992208   70458 provision.go:87] duration metric: took 289.765831ms to configureAuth
	I0311 21:34:02.992232   70458 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:02.992376   70458 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:34:02.992440   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.994808   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.995124   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.995154   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.995315   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.995490   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.995640   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.995818   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.995997   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.996187   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.996202   70458 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:03.283611   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:03.283643   70458 machine.go:97] duration metric: took 938.487892ms to provisionDockerMachine
	I0311 21:34:03.283655   70458 start.go:293] postStartSetup for "no-preload-324578" (driver="kvm2")
	I0311 21:34:03.283667   70458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:03.283681   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.284008   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:03.284043   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.286802   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.287220   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.287262   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.287379   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.287546   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.287731   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.287930   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.372563   70458 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:03.377151   70458 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:03.377171   70458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:03.377225   70458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:03.377291   70458 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:03.377377   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:03.387792   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:03.412721   70458 start.go:296] duration metric: took 129.055927ms for postStartSetup
	I0311 21:34:03.412770   70458 fix.go:56] duration metric: took 21.487401487s for fixHost
	I0311 21:34:03.412790   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.415209   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.415507   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.415533   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.415668   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.415866   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.416035   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.416179   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.416353   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:03.416502   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:03.416513   70458 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:03.525759   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192843.475283818
	
	I0311 21:34:03.525781   70458 fix.go:216] guest clock: 1710192843.475283818
	I0311 21:34:03.525790   70458 fix.go:229] Guest: 2024-03-11 21:34:03.475283818 +0000 UTC Remote: 2024-03-11 21:34:03.412775346 +0000 UTC m=+298.052241307 (delta=62.508472ms)
	I0311 21:34:03.525815   70458 fix.go:200] guest clock delta is within tolerance: 62.508472ms
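
	[editor's note] The fix.go lines above compare the guest VM clock against the host clock and only treat the clock as needing correction when the difference exceeds a tolerance. A small illustrative Go sketch of that comparison, reusing the timestamps from the log; the 2s tolerance is an assumption for the example, not necessarily minikube's value.

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance returns the absolute guest/host difference and
	// whether it falls inside the given tolerance.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// values taken from the log lines above
		guest := time.Unix(1710192843, 475283818).UTC()
		host := time.Date(2024, 3, 11, 21, 34, 3, 412775346, time.UTC)
		delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%s within tolerance=%v\n", delta, ok) // delta=62.508472ms within tolerance=true
	}
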
	I0311 21:34:03.525833   70458 start.go:83] releasing machines lock for "no-preload-324578", held for 21.600490138s
	I0311 21:34:03.525866   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.526157   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:03.528771   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.529117   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.529143   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.529272   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529721   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529897   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529978   70458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:03.530022   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.530124   70458 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:03.530151   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.532450   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.532624   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.532813   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.532843   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.533001   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.533010   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.533034   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.533171   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.533197   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.533350   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.533353   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.533504   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.533506   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.533639   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.614855   70458 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:03.638835   70458 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:03.787832   70458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:03.794627   70458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:03.794677   70458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:03.811771   70458 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:03.811790   70458 start.go:494] detecting cgroup driver to use...
	I0311 21:34:03.811845   70458 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:03.829561   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:03.844536   70458 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:03.844582   70458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:03.859811   70458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:03.875041   70458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:03.991456   70458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:04.174783   70458 docker.go:233] disabling docker service ...
	I0311 21:34:04.174848   70458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:04.192524   70458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:04.206906   70458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:04.340047   70458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:04.455686   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:04.472512   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:04.495487   70458 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:34:04.495550   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.506921   70458 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:04.506997   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.519408   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.531418   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.543684   70458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:04.555846   70458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:04.567610   70458 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:04.567658   70458 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:04.583015   70458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:04.594515   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:04.715185   70458 ssh_runner.go:195] Run: sudo systemctl restart crio
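
	[editor's note] The ssh_runner commands above reconfigure CRI-O (crictl endpoint, pause image, cgroup manager, bridge netfilter, IPv4 forwarding) and then restart the service. The Go sketch below consolidates those same edits into one illustrative, script-style program; the shell fragments are taken from the log, but running them locally as root is an assumption for the example rather than minikube's actual ssh_runner flow.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one shell step and surfaces its combined output on failure.
	func run(cmd string) error {
		out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
		}
		return nil
	}

	func main() {
		steps := []string{
			// point crictl at the CRI-O socket
			`mkdir -p /etc && printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml`,
			// pin the pause image used by CRI-O
			`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
			// use cgroupfs as the cgroup manager, with conmon in the pod cgroup
			`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
			`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
			// enable bridge netfilter and IPv4 forwarding, then restart CRI-O
			`modprobe br_netfilter`,
			`echo 1 > /proc/sys/net/ipv4/ip_forward`,
			`systemctl daemon-reload && systemctl restart crio`,
		}
		for _, s := range steps {
			if err := run(s); err != nil {
				fmt.Println(err)
				return
			}
		}
	}
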
	I0311 21:34:04.872750   70458 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:04.872848   70458 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:04.878207   70458 start.go:562] Will wait 60s for crictl version
	I0311 21:34:04.878250   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:04.882436   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:04.921007   70458 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:04.921079   70458 ssh_runner.go:195] Run: crio --version
	I0311 21:34:04.959326   70458 ssh_runner.go:195] Run: crio --version
	I0311 21:34:04.997595   70458 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0311 21:34:04.999092   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:05.002092   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:05.002526   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:05.002566   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:05.002790   70458 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:05.007758   70458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:05.023330   70458 kubeadm.go:877] updating cluster {Name:no-preload-324578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:05.023430   70458 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 21:34:05.023461   70458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:05.063043   70458 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0311 21:34:05.063071   70458 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:34:05.063161   70458 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.063170   70458 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.063183   70458 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.063190   70458 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.063233   70458 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0311 21:34:05.063171   70458 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.063272   70458 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.063307   70458 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.065013   70458 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.065019   70458 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.065020   70458 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.065045   70458 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0311 21:34:05.065017   70458 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.065018   70458 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.065064   70458 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.065365   70458 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.209182   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.211431   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.220663   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.230965   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.237859   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0311 21:34:05.260820   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.288596   70458 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0311 21:34:05.288651   70458 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.288697   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.324896   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.342987   70458 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0311 21:34:05.343030   70458 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.343080   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.371663   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.377262   70458 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0311 21:34:05.377306   70458 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.377349   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:04.792889   70604 main.go:141] libmachine: (embed-certs-743937) Waiting to get IP...
	I0311 21:34:04.793678   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:04.794097   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:04.794152   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:04.794064   71579 retry.go:31] will retry after 281.522937ms: waiting for machine to come up
	I0311 21:34:05.077518   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.077856   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.077889   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.077814   71579 retry.go:31] will retry after 303.836522ms: waiting for machine to come up
	I0311 21:34:05.383244   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.383796   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.383839   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.383758   71579 retry.go:31] will retry after 333.172379ms: waiting for machine to come up
	I0311 21:34:05.718117   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.718603   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.718630   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.718562   71579 retry.go:31] will retry after 469.046827ms: waiting for machine to come up
	I0311 21:34:06.189304   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:06.189748   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:06.189777   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:06.189705   71579 retry.go:31] will retry after 636.781259ms: waiting for machine to come up
	I0311 21:34:06.828672   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:06.829136   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:06.829174   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:06.829078   71579 retry.go:31] will retry after 758.609427ms: waiting for machine to come up
	I0311 21:34:07.589134   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:07.589490   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:07.589513   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:07.589466   71579 retry.go:31] will retry after 990.575872ms: waiting for machine to come up
	I0311 21:34:08.581971   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:08.582312   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:08.582344   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:08.582290   71579 retry.go:31] will retry after 1.142377902s: waiting for machine to come up
	I0311 21:34:05.421288   70458 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0311 21:34:05.421340   70458 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.421390   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473450   70458 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0311 21:34:05.473497   70458 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.473527   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.473545   70458 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0311 21:34:05.473584   70458 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.473603   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.473639   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473663   70458 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0311 21:34:05.473701   70458 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.473707   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.473730   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473548   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473766   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.569510   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.569615   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.578915   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0311 21:34:05.578979   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:05.579007   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:05.579029   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.579077   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:05.579117   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.579158   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.579209   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0311 21:34:05.579272   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:05.584413   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0311 21:34:05.584425   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.584458   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.679191   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0311 21:34:05.679259   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0311 21:34:05.679288   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:05.679337   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:05.679368   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0311 21:34:05.679369   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0311 21:34:05.679414   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:05.679428   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0311 21:34:05.679485   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:07.621341   70458 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.942028932s)
	I0311 21:34:07.621382   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0311 21:34:07.621385   70458 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.941873405s)
	I0311 21:34:07.621413   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0311 21:34:07.621424   70458 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (1.941989707s)
	I0311 21:34:07.621452   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0311 21:34:07.621544   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.037072472s)
	I0311 21:34:07.621558   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0311 21:34:07.621580   70458 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:07.621627   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:09.726761   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:09.727207   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:09.727241   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:09.727153   71579 retry.go:31] will retry after 1.17092616s: waiting for machine to come up
	I0311 21:34:10.899311   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:10.899656   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:10.899675   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:10.899631   71579 retry.go:31] will retry after 1.870900402s: waiting for machine to come up
	I0311 21:34:12.771931   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:12.772421   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:12.772457   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:12.772375   71579 retry.go:31] will retry after 2.721804623s: waiting for machine to come up
	I0311 21:34:11.524646   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.902991705s)
	I0311 21:34:11.524683   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0311 21:34:11.524711   70458 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:11.524787   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:13.704750   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.179921724s)
	I0311 21:34:13.704786   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0311 21:34:13.704817   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:13.704868   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:15.496186   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:15.496686   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:15.496722   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:15.496627   71579 retry.go:31] will retry after 2.568850361s: waiting for machine to come up
	I0311 21:34:18.068470   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:18.068926   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:18.068959   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:18.068872   71579 retry.go:31] will retry after 4.111366971s: waiting for machine to come up
	I0311 21:34:16.267427   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.562528088s)
	I0311 21:34:16.267458   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0311 21:34:16.267486   70458 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:16.267535   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:17.218029   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0311 21:34:17.218065   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:17.218104   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:18.987120   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.768996335s)
	I0311 21:34:18.987149   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0311 21:34:18.987167   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:18.987219   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:23.543571   70908 start.go:364] duration metric: took 4m22.394278247s to acquireMachinesLock for "old-k8s-version-239315"
	I0311 21:34:23.543649   70908 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:23.543661   70908 fix.go:54] fixHost starting: 
	I0311 21:34:23.544084   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:23.544139   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:23.561669   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0311 21:34:23.562158   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:23.562618   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:34:23.562645   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:23.562949   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:23.563114   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:23.563306   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetState
	I0311 21:34:23.565152   70908 fix.go:112] recreateIfNeeded on old-k8s-version-239315: state=Stopped err=<nil>
	I0311 21:34:23.565178   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	W0311 21:34:23.565351   70908 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:23.567943   70908 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239315" ...
	I0311 21:34:22.182707   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.183200   70604 main.go:141] libmachine: (embed-certs-743937) Found IP for machine: 192.168.50.114
	I0311 21:34:22.183228   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has current primary IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.183238   70604 main.go:141] libmachine: (embed-certs-743937) Reserving static IP address...
	I0311 21:34:22.183694   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "embed-certs-743937", mac: "52:54:00:84:b4:7a", ip: "192.168.50.114"} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.183716   70604 main.go:141] libmachine: (embed-certs-743937) DBG | skip adding static IP to network mk-embed-certs-743937 - found existing host DHCP lease matching {name: "embed-certs-743937", mac: "52:54:00:84:b4:7a", ip: "192.168.50.114"}
	I0311 21:34:22.183728   70604 main.go:141] libmachine: (embed-certs-743937) Reserved static IP address: 192.168.50.114
	I0311 21:34:22.183746   70604 main.go:141] libmachine: (embed-certs-743937) Waiting for SSH to be available...
	I0311 21:34:22.183760   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Getting to WaitForSSH function...
	I0311 21:34:22.185820   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.186157   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.186193   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.186285   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Using SSH client type: external
	I0311 21:34:22.186317   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa (-rw-------)
	I0311 21:34:22.186349   70604 main.go:141] libmachine: (embed-certs-743937) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:22.186368   70604 main.go:141] libmachine: (embed-certs-743937) DBG | About to run SSH command:
	I0311 21:34:22.186384   70604 main.go:141] libmachine: (embed-certs-743937) DBG | exit 0
	I0311 21:34:22.313253   70604 main.go:141] libmachine: (embed-certs-743937) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:22.313570   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetConfigRaw
	I0311 21:34:22.314271   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:22.317040   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.317404   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.317509   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.317641   70604 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/config.json ...
	I0311 21:34:22.317814   70604 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:22.317830   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:22.318049   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.320550   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.320833   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.320859   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.320992   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.321223   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.321405   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.321547   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.321708   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.321930   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.321944   70604 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:22.430028   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:22.430055   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.430345   70604 buildroot.go:166] provisioning hostname "embed-certs-743937"
	I0311 21:34:22.430374   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.430568   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.433555   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.433884   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.433907   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.434102   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.434311   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.434474   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.434611   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.434762   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.434936   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.434954   70604 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-743937 && echo "embed-certs-743937" | sudo tee /etc/hostname
	I0311 21:34:22.564819   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-743937
	
	I0311 21:34:22.564848   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.567667   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.568075   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.568122   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.568325   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.568519   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.568719   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.568913   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.569094   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.569335   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.569361   70604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-743937' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-743937/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-743937' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:22.684397   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:22.684425   70604 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:22.684473   70604 buildroot.go:174] setting up certificates
	I0311 21:34:22.684490   70604 provision.go:84] configureAuth start
	I0311 21:34:22.684507   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.684840   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:22.687805   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.688156   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.688178   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.688401   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.690975   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.691302   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.691321   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.691469   70604 provision.go:143] copyHostCerts
	I0311 21:34:22.691528   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:22.691540   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:22.691598   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:22.691690   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:22.691706   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:22.691729   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:22.691834   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:22.691850   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:22.691878   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:22.691946   70604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.embed-certs-743937 san=[127.0.0.1 192.168.50.114 embed-certs-743937 localhost minikube]
	I0311 21:34:22.838395   70604 provision.go:177] copyRemoteCerts
	I0311 21:34:22.838452   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:22.838478   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.840975   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.841308   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.841342   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.841487   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.841684   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.841834   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.841968   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:22.924202   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:22.956079   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0311 21:34:22.982352   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 21:34:23.008286   70604 provision.go:87] duration metric: took 323.780619ms to configureAuth
	I0311 21:34:23.008311   70604 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:23.008481   70604 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:34:23.008553   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.011128   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.011439   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.011461   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.011632   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.011780   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.011919   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.012094   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.012278   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:23.012436   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:23.012452   70604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:23.288122   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:23.288146   70604 machine.go:97] duration metric: took 970.321311ms to provisionDockerMachine
	I0311 21:34:23.288157   70604 start.go:293] postStartSetup for "embed-certs-743937" (driver="kvm2")
	I0311 21:34:23.288167   70604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:23.288180   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.288496   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:23.288532   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.291434   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.291823   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.291856   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.292079   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.292297   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.292468   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.292629   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.376367   70604 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:23.381629   70604 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:23.381660   70604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:23.381754   70604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:23.381855   70604 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:23.381967   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:23.392280   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:23.423241   70604 start.go:296] duration metric: took 135.071082ms for postStartSetup
	I0311 21:34:23.423283   70604 fix.go:56] duration metric: took 19.897275281s for fixHost
	I0311 21:34:23.423310   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.426264   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.426623   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.426652   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.426862   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.427052   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.427256   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.427419   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.427575   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:23.427809   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:23.427822   70604 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:23.543425   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192863.499269756
	
	I0311 21:34:23.543447   70604 fix.go:216] guest clock: 1710192863.499269756
	I0311 21:34:23.543454   70604 fix.go:229] Guest: 2024-03-11 21:34:23.499269756 +0000 UTC Remote: 2024-03-11 21:34:23.423289031 +0000 UTC m=+304.494814333 (delta=75.980725ms)
	I0311 21:34:23.543472   70604 fix.go:200] guest clock delta is within tolerance: 75.980725ms
	I0311 21:34:23.543478   70604 start.go:83] releasing machines lock for "embed-certs-743937", held for 20.0175167s
	I0311 21:34:23.543504   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.543746   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:23.546763   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.547188   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.547223   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.547396   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.547882   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.548077   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.548163   70604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:23.548226   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.548282   70604 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:23.548309   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.551186   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551485   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551609   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.551642   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551795   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.551979   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.552001   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.552035   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.552146   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.552211   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.552277   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.552368   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.552501   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.552666   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.660064   70604 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:23.668731   70604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:23.831784   70604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:23.840331   70604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:23.840396   70604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:23.864730   70604 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:23.864766   70604 start.go:494] detecting cgroup driver to use...
	I0311 21:34:23.864831   70604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:23.886072   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:23.901660   70604 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:23.901727   70604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:23.917374   70604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:23.932525   70604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:24.066368   70604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:24.222425   70604 docker.go:233] disabling docker service ...
	I0311 21:34:24.222487   70604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:24.240937   70604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:24.257050   70604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:24.395003   70604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:24.550709   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:24.572524   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:24.599710   70604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:34:24.599776   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.612426   70604 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:24.612514   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.626989   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.639576   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.653711   70604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:24.673581   70604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:24.684772   70604 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:24.684841   70604 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:24.707855   70604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:24.719801   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:24.904788   70604 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:25.063437   70604 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:25.063511   70604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:25.070294   70604 start.go:562] Will wait 60s for crictl version
	I0311 21:34:25.070352   70604 ssh_runner.go:195] Run: which crictl
	I0311 21:34:25.074945   70604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:25.121979   70604 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:25.122070   70604 ssh_runner.go:195] Run: crio --version
	I0311 21:34:25.159092   70604 ssh_runner.go:195] Run: crio --version
	I0311 21:34:25.207391   70604 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 21:34:21.469205   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.481954559s)
	I0311 21:34:21.469242   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0311 21:34:21.469285   70458 cache_images.go:123] Successfully loaded all cached images
	I0311 21:34:21.469295   70458 cache_images.go:92] duration metric: took 16.40620232s to LoadCachedImages
	I0311 21:34:21.469306   70458 kubeadm.go:928] updating node { 192.168.39.36 8443 v1.29.0-rc.2 crio true true} ...
	I0311 21:34:21.469436   70458 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-324578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:21.469513   70458 ssh_runner.go:195] Run: crio config
	I0311 21:34:21.531635   70458 cni.go:84] Creating CNI manager for ""
	I0311 21:34:21.531659   70458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:21.531671   70458 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:21.531690   70458 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-324578 NodeName:no-preload-324578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:34:21.531820   70458 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-324578"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:21.531876   70458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0311 21:34:21.546000   70458 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:21.546060   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:21.558818   70458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0311 21:34:21.577685   70458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0311 21:34:21.595960   70458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0311 21:34:21.615003   70458 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:21.619290   70458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
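	The /etc/hosts one-liner above drops any existing control-plane.minikube.internal line and appends the fresh mapping. A minimal Go sketch of the same idempotent rewrite (ensureHostsEntry is a hypothetical helper; it operates on a scratch copy rather than via sudo):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry keeps exactly one mapping for host in a hosts file,
	// mirroring the grep -v / echo / cp pipeline in the log above.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop any stale mapping for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Run against a scratch copy; the real command edits /etc/hosts via sudo.
		if err := ensureHostsEntry("hosts.copy", "192.168.39.36", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
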
	I0311 21:34:21.633307   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:21.751586   70458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:21.771672   70458 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578 for IP: 192.168.39.36
	I0311 21:34:21.771698   70458 certs.go:194] generating shared ca certs ...
	I0311 21:34:21.771717   70458 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:21.771907   70458 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:21.771975   70458 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:21.771987   70458 certs.go:256] generating profile certs ...
	I0311 21:34:21.772093   70458 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/client.key
	I0311 21:34:21.772190   70458 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.key.681a9200
	I0311 21:34:21.772244   70458 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.key
	I0311 21:34:21.772371   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:21.772421   70458 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:21.772435   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:21.772475   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:21.772509   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:21.772542   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:21.772606   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:21.773241   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:21.833566   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:21.868156   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:21.910118   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:21.952222   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0311 21:34:21.988148   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 21:34:22.018493   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:22.045225   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:22.071481   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:22.097525   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:22.123425   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:22.156613   70458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:22.174679   70458 ssh_runner.go:195] Run: openssl version
	I0311 21:34:22.181137   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:22.197490   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.203508   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.203556   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.210822   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:22.224269   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:22.237282   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.242762   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.242816   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.249334   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:22.261866   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:22.273674   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.279004   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.279055   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.285394   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
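	The ln -fs steps above publish each CA under its OpenSSL subject-hash name (the output of openssl x509 -hash plus a ".0" suffix) so TLS verifiers can locate it in /etc/ssl/certs. A minimal Go sketch of the same idea, shelling out to openssl exactly as the logged commands do (paths are illustrative, not taken from the run):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash symlinks certPath into dir under "<subject-hash>.0",
	// mirroring the openssl x509 -hash + ln -fs sequence in the log.
	func linkByHash(certPath, dir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // -f semantics: replace any existing link
		return link, os.Symlink(certPath, link)
	}

	func main() {
		link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs-demo")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("created", link)
	}
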
	I0311 21:34:22.299493   70458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:22.304827   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:22.311349   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:22.318377   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:22.325621   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:22.332316   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:22.338893   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
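	Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours. The equivalent check in Go with crypto/x509, as a minimal sketch (the file path is illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// which is what `openssl x509 -checkend <seconds>` tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
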
	I0311 21:34:22.345167   70458 kubeadm.go:391] StartCluster: {Name:no-preload-324578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:22.345246   70458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:22.345286   70458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:22.386703   70458 cri.go:89] found id: ""
	I0311 21:34:22.386785   70458 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:22.398475   70458 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:22.398494   70458 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:22.398500   70458 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:22.398558   70458 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:22.409434   70458 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:22.410675   70458 kubeconfig.go:125] found "no-preload-324578" server: "https://192.168.39.36:8443"
	I0311 21:34:22.412906   70458 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:22.423677   70458 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.36
	I0311 21:34:22.423708   70458 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:22.423719   70458 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:22.423762   70458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:22.472548   70458 cri.go:89] found id: ""
	I0311 21:34:22.472615   70458 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:22.494701   70458 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:22.506944   70458 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:22.506964   70458 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:22.507015   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:22.517468   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:22.517521   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:22.528281   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:22.538496   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:22.538533   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:22.553009   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:22.566120   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:22.566189   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:22.579239   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:22.590180   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:22.590227   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:22.602988   70458 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:22.615631   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:22.730568   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.355205   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.588923   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.694870   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.796820   70458 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:23.796918   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.297341   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.797197   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.840030   70458 api_server.go:72] duration metric: took 1.043209284s to wait for apiserver process to appear ...
	I0311 21:34:24.840062   70458 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:34:24.840101   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:24.840560   70458 api_server.go:269] stopped: https://192.168.39.36:8443/healthz: Get "https://192.168.39.36:8443/healthz": dial tcp 192.168.39.36:8443: connect: connection refused
	I0311 21:34:25.341161   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
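	The healthz loop above retries GET https://192.168.39.36:8443/healthz until the apiserver answers 200; the 403 bodies later in the log are the anonymous probe being refused, and the 500 bodies show post-start hooks still completing. A minimal Go sketch of that polling, assuming the bootstrap serving cert is skipped the way an anonymous probe would skip it:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a self-signed cert during bootstrap.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.36:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
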
	I0311 21:34:23.569356   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .Start
	I0311 21:34:23.569527   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring networks are active...
	I0311 21:34:23.570188   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network default is active
	I0311 21:34:23.570613   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network mk-old-k8s-version-239315 is active
	I0311 21:34:23.571070   70908 main.go:141] libmachine: (old-k8s-version-239315) Getting domain xml...
	I0311 21:34:23.571836   70908 main.go:141] libmachine: (old-k8s-version-239315) Creating domain...
	I0311 21:34:24.895619   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting to get IP...
	I0311 21:34:24.896680   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:24.897160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:24.897218   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:24.897131   71714 retry.go:31] will retry after 268.563191ms: waiting for machine to come up
	I0311 21:34:25.167783   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.168312   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.168343   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.168268   71714 retry.go:31] will retry after 245.059124ms: waiting for machine to come up
	I0311 21:34:25.414644   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.415139   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.415168   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.415100   71714 retry.go:31] will retry after 407.807793ms: waiting for machine to come up
	I0311 21:34:25.824887   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.825351   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.825379   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.825274   71714 retry.go:31] will retry after 503.187834ms: waiting for machine to come up
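	Each "will retry after ..." line above is retry.go backing off while the VM waits for a DHCP lease. A minimal generic sketch of that retry-with-growing-jittered-delay pattern (not minikube's actual retry implementation):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil keeps calling probe with a jittered, growing delay until it
	// succeeds or the deadline passes - the shape of the waits in the log above.
	func retryUntil(timeout time.Duration, probe func() error) error {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for {
			err := probe()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			delay += delay / 2 // grow the base delay each attempt
		}
	}

	func main() {
		attempts := 0
		_ = retryUntil(10*time.Second, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
	}
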
	I0311 21:34:25.208819   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:25.211726   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:25.212203   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:25.212244   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:25.212486   70604 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:25.217365   70604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:25.233670   70604 kubeadm.go:877] updating cluster {Name:embed-certs-743937 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:25.233825   70604 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:34:25.233886   70604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:25.282028   70604 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 21:34:25.282108   70604 ssh_runner.go:195] Run: which lz4
	I0311 21:34:25.287047   70604 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:34:25.291721   70604 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:34:25.291751   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 21:34:27.414481   70604 crio.go:444] duration metric: took 2.127464595s to copy over tarball
	I0311 21:34:27.414554   70604 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:34:28.225996   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:28.226031   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:28.226048   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.285274   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:28.285307   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:28.340493   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.512353   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:28.512409   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:28.840800   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.852523   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:28.852560   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:29.341135   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:29.354997   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:29.355028   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:29.840769   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:29.848023   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0311 21:34:29.856262   70458 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:34:29.856290   70458 api_server.go:131] duration metric: took 5.016219789s to wait for apiserver health ...
	I0311 21:34:29.856300   70458 cni.go:84] Creating CNI manager for ""
	I0311 21:34:29.856308   70458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:29.858297   70458 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:34:29.859734   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:34:29.891375   70458 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:34:29.932393   70458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:34:29.959208   70458 system_pods.go:59] 8 kube-system pods found
	I0311 21:34:29.959257   70458 system_pods.go:61] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:34:29.959268   70458 system_pods.go:61] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:34:29.959278   70458 system_pods.go:61] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:34:29.959290   70458 system_pods.go:61] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:34:29.959296   70458 system_pods.go:61] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:34:29.959303   70458 system_pods.go:61] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:34:29.959319   70458 system_pods.go:61] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:34:29.959335   70458 system_pods.go:61] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:34:29.959343   70458 system_pods.go:74] duration metric: took 26.926978ms to wait for pod list to return data ...
	I0311 21:34:29.959355   70458 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:34:29.963151   70458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:34:29.963179   70458 node_conditions.go:123] node cpu capacity is 2
	I0311 21:34:29.963193   70458 node_conditions.go:105] duration metric: took 3.825246ms to run NodePressure ...
	I0311 21:34:29.963209   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:26.330005   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:26.330547   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:26.330569   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:26.330464   71714 retry.go:31] will retry after 723.914956ms: waiting for machine to come up
	I0311 21:34:27.056271   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.056879   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.056901   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.056834   71714 retry.go:31] will retry after 693.583075ms: waiting for machine to come up
	I0311 21:34:27.752514   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.752958   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.752980   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.752916   71714 retry.go:31] will retry after 902.247864ms: waiting for machine to come up
	I0311 21:34:28.657551   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:28.658023   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:28.658079   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:28.658008   71714 retry.go:31] will retry after 1.140425887s: waiting for machine to come up
	I0311 21:34:29.800305   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:29.800824   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:29.800852   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:29.800774   71714 retry.go:31] will retry after 1.68593342s: waiting for machine to come up
	I0311 21:34:32.367999   70458 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.404768175s)
	I0311 21:34:32.368034   70458 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:34:32.375444   70458 kubeadm.go:733] kubelet initialised
	I0311 21:34:32.375468   70458 kubeadm.go:734] duration metric: took 7.423643ms waiting for restarted kubelet to initialise ...
	I0311 21:34:32.375477   70458 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:32.383579   70458 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.389728   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "coredns-76f75df574-s6lsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.389755   70458 pod_ready.go:81] duration metric: took 6.144226ms for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.389766   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "coredns-76f75df574-s6lsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.389775   70458 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.398797   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "etcd-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.398822   70458 pod_ready.go:81] duration metric: took 9.033188ms for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.398833   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "etcd-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.398841   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.407870   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-apiserver-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.407905   70458 pod_ready.go:81] duration metric: took 9.056349ms for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.407915   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-apiserver-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.407928   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.414434   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.414455   70458 pod_ready.go:81] duration metric: took 6.519611ms for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.414463   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.414468   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.771994   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-proxy-rmz4b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.772025   70458 pod_ready.go:81] duration metric: took 357.549783ms for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.772034   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-proxy-rmz4b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.772041   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:33.175562   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-scheduler-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.175595   70458 pod_ready.go:81] duration metric: took 403.546508ms for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:33.175608   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-scheduler-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.175617   70458 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:33.573749   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.573777   70458 pod_ready.go:81] duration metric: took 398.141162ms for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:33.573789   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.573799   70458 pod_ready.go:38] duration metric: took 1.198311127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
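
The pod_ready waits logged above poll each system-critical pod until its node reports Ready; here the node is still NotReady, so every wait is skipped and only the extra-wait timer is recorded. Below is a minimal sketch of that style of readiness poll using client-go — the kubeconfig path, namespace, and pod name are placeholders, and this is not minikube's pod_ready.go implementation.

    // readiness_sketch.go - sketch of waiting for a pod's Ready condition,
    // in the spirit of the pod_ready.go waits above (illustrative only).
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; the test run uses its own profile kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll every 2s, for up to 4 minutes, until the pod reports Ready=True.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-no-preload-324578", metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient errors and keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
        fmt.Println("wait result:", err)
    }
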
	I0311 21:34:33.573862   70458 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:34:33.592112   70458 ops.go:34] apiserver oom_adj: -16
	I0311 21:34:33.592148   70458 kubeadm.go:591] duration metric: took 11.193640837s to restartPrimaryControlPlane
	I0311 21:34:33.592161   70458 kubeadm.go:393] duration metric: took 11.247001751s to StartCluster
	I0311 21:34:33.592181   70458 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:33.592269   70458 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:34:33.594144   70458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:33.594461   70458 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:34:33.596303   70458 out.go:177] * Verifying Kubernetes components...
	I0311 21:34:33.594553   70458 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:34:33.594702   70458 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:34:33.597724   70458 addons.go:69] Setting default-storageclass=true in profile "no-preload-324578"
	I0311 21:34:33.597727   70458 addons.go:69] Setting storage-provisioner=true in profile "no-preload-324578"
	I0311 21:34:33.597739   70458 addons.go:69] Setting metrics-server=true in profile "no-preload-324578"
	I0311 21:34:33.597759   70458 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-324578"
	I0311 21:34:33.597771   70458 addons.go:234] Setting addon storage-provisioner=true in "no-preload-324578"
	I0311 21:34:33.597772   70458 addons.go:234] Setting addon metrics-server=true in "no-preload-324578"
	W0311 21:34:33.597780   70458 addons.go:243] addon storage-provisioner should already be in state true
	W0311 21:34:33.597795   70458 addons.go:243] addon metrics-server should already be in state true
	I0311 21:34:33.597828   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.597838   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.597733   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:33.598079   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598110   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.598224   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598260   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598305   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.598269   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.613473   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0311 21:34:33.613994   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.614558   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.614576   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.614946   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.615385   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.615415   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.618026   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0311 21:34:33.618201   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0311 21:34:33.618370   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.618497   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.618818   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.618833   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.618978   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.618989   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.619157   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.619343   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.619389   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.619926   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.619956   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.623211   70458 addons.go:234] Setting addon default-storageclass=true in "no-preload-324578"
	W0311 21:34:33.623232   70458 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:34:33.623260   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.623634   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.623660   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.635263   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35961
	I0311 21:34:33.635575   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.636071   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.636080   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.636462   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.636606   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.638520   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.640583   70458 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:34:33.642029   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:34:33.642045   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:34:33.642058   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.640562   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0311 21:34:33.641020   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0311 21:34:33.642572   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.643082   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.643107   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.643432   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.644002   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.644030   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.644213   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.644711   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.644733   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.645120   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.645334   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.645406   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.645861   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.645888   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.646042   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.646332   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.646548   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.646719   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.646986   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.648681   70458 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:30.659466   70604 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.244884989s)
	I0311 21:34:30.659492   70604 crio.go:451] duration metric: took 3.244983149s to extract the tarball
	I0311 21:34:30.659500   70604 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:34:30.708661   70604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:30.769502   70604 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:34:30.769530   70604 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:34:30.769540   70604 kubeadm.go:928] updating node { 192.168.50.114 8443 v1.28.4 crio true true} ...
	I0311 21:34:30.769675   70604 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-743937 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:30.769757   70604 ssh_runner.go:195] Run: crio config
	I0311 21:34:30.820223   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:34:30.820251   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:30.820267   70604 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:30.820296   70604 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-743937 NodeName:embed-certs-743937 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:34:30.820475   70604 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-743937"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:30.820563   70604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 21:34:30.833086   70604 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:30.833175   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:30.844335   70604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0311 21:34:30.863586   70604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:34:30.883598   70604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0311 21:34:30.904711   70604 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:30.909433   70604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
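
The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP: it drops any stale entry and appends the current one. A local stand-in for the same fixup in Go (the log performs it over SSH on the guest; paths and IP are taken from the log):

    // hosts_sketch.go - sketch of the /etc/hosts fixup above: remove any stale
    // control-plane.minikube.internal entry and append the current IP.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const entry = "192.168.50.114\t" + host

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // stale entry; rewritten below
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }
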
	I0311 21:34:30.924054   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:31.064573   70604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:31.096931   70604 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937 for IP: 192.168.50.114
	I0311 21:34:31.096960   70604 certs.go:194] generating shared ca certs ...
	I0311 21:34:31.096980   70604 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:31.097157   70604 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:31.097220   70604 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:31.097236   70604 certs.go:256] generating profile certs ...
	I0311 21:34:31.097368   70604 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/client.key
	I0311 21:34:31.097453   70604 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.key.c230aed9
	I0311 21:34:31.097520   70604 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.key
	I0311 21:34:31.097660   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:31.097709   70604 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:31.097770   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:31.097826   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:31.097867   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:31.097899   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:31.097958   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:31.098771   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:31.135109   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:31.173483   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:31.215059   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:31.253244   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0311 21:34:31.305450   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 21:34:31.340238   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:31.366993   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:31.393936   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:31.420998   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:31.446500   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:31.474047   70604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:31.493935   70604 ssh_runner.go:195] Run: openssl version
	I0311 21:34:31.500607   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:31.513874   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.519255   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.519303   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.525967   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:31.538995   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:31.551625   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.557235   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.557292   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.563658   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:31.576689   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:31.589299   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.594405   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.594453   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.601041   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:31.619307   70604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:31.624565   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:31.632121   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:31.638843   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:31.646400   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:31.652701   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:31.659661   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
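
The openssl x509 -checkend 86400 runs above confirm that each control-plane certificate remains valid for at least the next 24 hours. A rough standard-library equivalent in Go is sketched below; the certificate path and window are illustrative and this is not minikube's certs.go logic.

    // certcheck_sketch.go - sketch of an "openssl x509 -checkend 86400" style check
    // like the ones run above for the apiserver, etcd, and front-proxy certs.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(certPath string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(certPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", certPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // True when NotAfter falls inside the window, i.e. the cert is about to expire.
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        // Placeholder path; the log checks several certs under /var/lib/minikube/certs.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
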
	I0311 21:34:31.666390   70604 kubeadm.go:391] StartCluster: {Name:embed-certs-743937 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:31.666496   70604 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:31.666546   70604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:31.716714   70604 cri.go:89] found id: ""
	I0311 21:34:31.716796   70604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:31.733945   70604 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:31.733967   70604 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:31.733974   70604 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:31.734019   70604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:31.746543   70604 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:31.747720   70604 kubeconfig.go:125] found "embed-certs-743937" server: "https://192.168.50.114:8443"
	I0311 21:34:31.749670   70604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:31.762374   70604 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.114
	I0311 21:34:31.762401   70604 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:31.762410   70604 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:31.762462   70604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:31.811965   70604 cri.go:89] found id: ""
	I0311 21:34:31.812055   70604 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:31.836539   70604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:31.849272   70604 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:31.849295   70604 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:31.849348   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:31.861345   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:31.861423   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:31.875436   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:31.887183   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:31.887251   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:31.900032   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:31.911614   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:31.911690   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:31.924791   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:31.937131   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:31.937204   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:31.949123   70604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:31.960234   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:32.089622   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:32.806370   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.033263   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.135981   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
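
During the cluster restart the kubeadm init phases are replayed in a fixed order: certs, kubeconfig, kubelet-start, control-plane, and etcd, each against the regenerated /var/tmp/minikube/kubeadm.yaml. The sketch below mirrors the commands in the log but runs them with a plain os/exec loop; minikube actually drives them through its ssh_runner on the guest, so this is illustrative only.

    // phases_sketch.go - sketch of replaying the kubeadm init phases in the
    // order logged above (certs, kubeconfig, kubelet-start, control-plane, etcd).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.28.4:$PATH\" kubeadm init phase " + p +
                " --config /var/tmp/minikube/kubeadm.yaml"
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("phase %q:\n%s\n", p, out)
            if err != nil {
                panic(err)
            }
        }
    }
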
	I0311 21:34:33.248827   70604 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:33.248917   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:33.749207   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:33.650190   70458 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:34:33.650207   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:34:33.650223   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.653451   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.653895   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.653920   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.654131   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.654302   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.654472   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.654631   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.689121   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0311 21:34:33.689487   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.693084   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.693105   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.693596   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.693796   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.696074   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.696629   70458 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:34:33.696644   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:34:33.696662   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.699920   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.700323   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.700342   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.700564   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.700756   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.700859   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.700932   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.896331   70458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:33.969322   70458 node_ready.go:35] waiting up to 6m0s for node "no-preload-324578" to be "Ready" ...
	I0311 21:34:34.037114   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:34:34.059051   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:34:34.059080   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:34:34.094822   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:34:34.142231   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:34:34.142259   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:34:34.218979   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:34:34.219002   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:34:34.260381   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:34:35.648210   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.61103949s)
	I0311 21:34:35.648241   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.553388189s)
	I0311 21:34:35.648344   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648381   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648367   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648409   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648658   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.648675   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.648685   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648694   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648754   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.648997   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.649019   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.650050   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.650068   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.650091   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.650101   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.650367   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.650384   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.658738   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.658764   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.658991   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.659007   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.687393   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.426969773s)
	I0311 21:34:35.687453   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.687467   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.687771   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.687810   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.687828   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.687848   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.687831   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.688142   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.688164   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.688178   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.688214   70458 addons.go:470] Verifying addon metrics-server=true in "no-preload-324578"
	I0311 21:34:35.690413   70458 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
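
The addon step above copies each manifest to /etc/kubernetes/addons on the node and applies the whole set with the cluster's own kubectl binary and kubeconfig. A standalone sketch of that apply call follows; the binary path, kubeconfig, and manifest list come straight from the log, but invoking them from a plain Go program like this is illustrative rather than minikube's addons code.

    // addons_sketch.go - sketch of the metrics-server addon apply logged above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl", args...)
        // Point kubectl at the in-VM kubeconfig, as the logged command does.
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s\n", out)
        if err != nil {
            panic(err)
        }
    }
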
	I0311 21:34:31.488010   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:31.488449   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:31.488471   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:31.488421   71714 retry.go:31] will retry after 2.325869089s: waiting for machine to come up
	I0311 21:34:33.815568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:33.816215   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:33.816236   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:33.816176   71714 retry.go:31] will retry after 2.457084002s: waiting for machine to come up
	I0311 21:34:34.249462   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:34.749177   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:34.778830   70604 api_server.go:72] duration metric: took 1.530004395s to wait for apiserver process to appear ...
	I0311 21:34:34.778858   70604 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:34:34.778879   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:34.779469   70604 api_server.go:269] stopped: https://192.168.50.114:8443/healthz: Get "https://192.168.50.114:8443/healthz": dial tcp 192.168.50.114:8443: connect: connection refused
	I0311 21:34:35.279027   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.110193   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:38.110221   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:38.110234   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.159861   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:38.159909   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:38.279045   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.289460   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:38.289491   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:38.779423   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.785174   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:38.785206   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:39.278910   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:39.290017   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:39.290054   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:39.779616   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:39.786362   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0311 21:34:39.794557   70604 api_server.go:141] control plane version: v1.28.4
	I0311 21:34:39.794583   70604 api_server.go:131] duration metric: took 5.01571788s to wait for apiserver health ...
	I0311 21:34:39.794594   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:34:39.794601   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:39.796063   70604 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:34:35.691844   70458 addons.go:505] duration metric: took 2.097304232s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0311 21:34:35.974533   70458 node_ready.go:53] node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:37.983073   70458 node_ready.go:53] node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:38.977713   70458 node_ready.go:49] node "no-preload-324578" has status "Ready":"True"
	I0311 21:34:38.977738   70458 node_ready.go:38] duration metric: took 5.008382488s for node "no-preload-324578" to be "Ready" ...
	I0311 21:34:38.977749   70458 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:38.986414   70458 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:38.993430   70458 pod_ready.go:92] pod "coredns-76f75df574-s6lsb" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:38.993454   70458 pod_ready.go:81] duration metric: took 7.012539ms for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:38.993465   70458 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:36.274640   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:36.275119   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:36.275157   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:36.275064   71714 retry.go:31] will retry after 3.618026102s: waiting for machine to come up
	I0311 21:34:39.894877   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:39.895397   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:39.895447   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:39.895343   71714 retry.go:31] will retry after 3.826847061s: waiting for machine to come up
	I0311 21:34:39.797420   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:34:39.810877   70604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:34:39.836773   70604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:34:39.852496   70604 system_pods.go:59] 8 kube-system pods found
	I0311 21:34:39.852541   70604 system_pods.go:61] "coredns-5dd5756b68-czng9" [a57d0643-36c5-44e2-a113-de051d0e0408] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:34:39.852556   70604 system_pods.go:61] "etcd-embed-certs-743937" [9f0051e8-247f-4968-a834-c38c5f0c4407] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:34:39.852567   70604 system_pods.go:61] "kube-apiserver-embed-certs-743937" [4ac979a6-1906-4a58-9d41-9587d66d81ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:34:39.852578   70604 system_pods.go:61] "kube-controller-manager-embed-certs-743937" [263ba100-e911-4857-a973-c4dc9312a653] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:34:39.852591   70604 system_pods.go:61] "kube-proxy-n2qzt" [21f56cfb-a3f5-4c4b-993d-53b6d8f60ec2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:34:39.852600   70604 system_pods.go:61] "kube-scheduler-embed-certs-743937" [0121fa4d-91a8-432b-9f21-c6e8c0b33872] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:34:39.852606   70604 system_pods.go:61] "metrics-server-57f55c9bc5-7qw98" [3d3f2e87-2e36-4ca3-b31c-fc5f38251f03] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:34:39.852617   70604 system_pods.go:61] "storage-provisioner" [72fd13c7-1a79-4e8a-bdc2-f45117599d85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:34:39.852624   70604 system_pods.go:74] duration metric: took 15.823708ms to wait for pod list to return data ...
	I0311 21:34:39.852634   70604 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:34:39.856288   70604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:34:39.856309   70604 node_conditions.go:123] node cpu capacity is 2
	I0311 21:34:39.856317   70604 node_conditions.go:105] duration metric: took 3.676347ms to run NodePressure ...
	I0311 21:34:39.856331   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:40.103882   70604 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:34:40.108726   70604 kubeadm.go:733] kubelet initialised
	I0311 21:34:40.108758   70604 kubeadm.go:734] duration metric: took 4.847245ms waiting for restarted kubelet to initialise ...
	I0311 21:34:40.108768   70604 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:40.115566   70604 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-czng9" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:42.124435   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:45.026187   70417 start.go:364] duration metric: took 58.09976601s to acquireMachinesLock for "default-k8s-diff-port-766430"
	I0311 21:34:45.026231   70417 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:45.026242   70417 fix.go:54] fixHost starting: 
	I0311 21:34:45.026632   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:45.026661   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:45.046341   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0311 21:34:45.046779   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:45.047336   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:34:45.047375   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:45.047741   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:45.047920   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:34:45.048090   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:34:45.049581   70417 fix.go:112] recreateIfNeeded on default-k8s-diff-port-766430: state=Stopped err=<nil>
	I0311 21:34:45.049605   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	W0311 21:34:45.049759   70417 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:45.051505   70417 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-766430" ...
	I0311 21:34:41.001474   70458 pod_ready.go:102] pod "etcd-no-preload-324578" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:43.500991   70458 pod_ready.go:92] pod "etcd-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.501018   70458 pod_ready.go:81] duration metric: took 4.507545237s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.501030   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.506732   70458 pod_ready.go:92] pod "kube-apiserver-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.506753   70458 pod_ready.go:81] duration metric: took 5.714866ms for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.506764   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.511432   70458 pod_ready.go:92] pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.511456   70458 pod_ready.go:81] duration metric: took 4.684021ms for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.511469   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.516333   70458 pod_ready.go:92] pod "kube-proxy-rmz4b" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.516360   70458 pod_ready.go:81] duration metric: took 4.882955ms for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.516370   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.521501   70458 pod_ready.go:92] pod "kube-scheduler-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.521524   70458 pod_ready.go:81] duration metric: took 5.146945ms for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.521532   70458 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.723851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724335   70908 main.go:141] libmachine: (old-k8s-version-239315) Found IP for machine: 192.168.72.52
	I0311 21:34:43.724367   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserving static IP address...
	I0311 21:34:43.724382   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has current primary IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724722   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.724759   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserved static IP address: 192.168.72.52
	I0311 21:34:43.724774   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | skip adding static IP to network mk-old-k8s-version-239315 - found existing host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"}
	I0311 21:34:43.724797   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Getting to WaitForSSH function...
	I0311 21:34:43.724815   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting for SSH to be available...
	I0311 21:34:43.727015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727330   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.727354   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727541   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH client type: external
	I0311 21:34:43.727568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa (-rw-------)
	I0311 21:34:43.727624   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:43.727641   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | About to run SSH command:
	I0311 21:34:43.727651   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | exit 0
	I0311 21:34:43.848884   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:43.849287   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetConfigRaw
	I0311 21:34:43.850084   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:43.852942   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853529   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.853572   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853801   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:34:43.854001   70908 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:43.854024   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:43.854255   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.856623   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857153   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.857187   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857321   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.857516   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857702   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857897   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.858105   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.858332   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.858349   70908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:43.961617   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:43.961664   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.961921   70908 buildroot.go:166] provisioning hostname "old-k8s-version-239315"
	I0311 21:34:43.961945   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.962134   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.964672   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.964987   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.965015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.965122   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.965305   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965466   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965591   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.965801   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.966042   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.966055   70908 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239315 && echo "old-k8s-version-239315" | sudo tee /etc/hostname
	I0311 21:34:44.088097   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239315
	
	I0311 21:34:44.088126   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.090911   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091167   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.091205   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091347   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.091524   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091680   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091818   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.091984   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.092185   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.092205   70908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239315' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239315/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239315' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:44.207643   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:44.207674   70908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:44.207693   70908 buildroot.go:174] setting up certificates
	I0311 21:34:44.207701   70908 provision.go:84] configureAuth start
	I0311 21:34:44.207710   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:44.207975   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:44.211160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211556   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.211588   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211754   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.214211   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214553   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.214585   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214732   70908 provision.go:143] copyHostCerts
	I0311 21:34:44.214797   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:44.214813   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:44.214886   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:44.214991   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:44.215005   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:44.215035   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:44.215160   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:44.215171   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:44.215198   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:44.215267   70908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239315 san=[127.0.0.1 192.168.72.52 localhost minikube old-k8s-version-239315]
	I0311 21:34:44.305250   70908 provision.go:177] copyRemoteCerts
	I0311 21:34:44.305329   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:44.305367   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.308244   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308636   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.308673   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308874   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.309092   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.309290   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.309446   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.394958   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:44.423314   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0311 21:34:44.459338   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:34:44.491201   70908 provision.go:87] duration metric: took 283.487383ms to configureAuth
	I0311 21:34:44.491232   70908 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:44.491419   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:34:44.491484   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.494039   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494476   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.494509   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494638   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.494830   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.494998   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.495175   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.495366   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.495548   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.495570   70908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:44.787935   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:44.787961   70908 machine.go:97] duration metric: took 933.945971ms to provisionDockerMachine
	I0311 21:34:44.787971   70908 start.go:293] postStartSetup for "old-k8s-version-239315" (driver="kvm2")
	I0311 21:34:44.787983   70908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:44.788007   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:44.788327   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:44.788355   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.791133   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791460   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.791492   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791637   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.791858   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.792021   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.792165   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.877163   70908 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:44.882141   70908 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:44.882164   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:44.882241   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:44.882330   70908 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:44.882442   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:44.894699   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:44.919809   70908 start.go:296] duration metric: took 131.8264ms for postStartSetup
	I0311 21:34:44.919848   70908 fix.go:56] duration metric: took 21.376188092s for fixHost
	I0311 21:34:44.919867   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.922414   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922708   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.922738   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922876   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.923075   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923274   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923455   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.923618   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.923806   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.923831   70908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:45.026068   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192885.004450463
	
	I0311 21:34:45.026088   70908 fix.go:216] guest clock: 1710192885.004450463
	I0311 21:34:45.026096   70908 fix.go:229] Guest: 2024-03-11 21:34:45.004450463 +0000 UTC Remote: 2024-03-11 21:34:44.919851167 +0000 UTC m=+283.922086595 (delta=84.599296ms)
	I0311 21:34:45.026118   70908 fix.go:200] guest clock delta is within tolerance: 84.599296ms
	I0311 21:34:45.026124   70908 start.go:83] releasing machines lock for "old-k8s-version-239315", held for 21.482500591s
	I0311 21:34:45.026158   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.026440   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:45.029366   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029778   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.029813   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029992   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030514   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030711   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030800   70908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:45.030846   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.030946   70908 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:45.030971   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.033851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.033989   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034264   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034292   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034324   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034348   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034429   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034618   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034633   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034799   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034814   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034979   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034977   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.035143   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.135748   70908 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:45.142408   70908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:45.297445   70908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:45.304482   70908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:45.304552   70908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:45.322754   70908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:45.322775   70908 start.go:494] detecting cgroup driver to use...
	I0311 21:34:45.322832   70908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:45.345988   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:45.363267   70908 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:45.363320   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:45.380892   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:45.396972   70908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:45.531640   70908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:45.700243   70908 docker.go:233] disabling docker service ...
	I0311 21:34:45.700306   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:45.730542   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:45.749068   70908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:45.903721   70908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:46.045122   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:46.065278   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:46.090726   70908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0311 21:34:46.090779   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.105783   70908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:46.105841   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.121702   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.136262   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.150628   70908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:46.163771   70908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:46.175613   70908 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:46.175675   70908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:46.193848   70908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:46.205694   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:46.344832   70908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:46.501773   70908 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:46.501851   70908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:46.507932   70908 start.go:562] Will wait 60s for crictl version
	I0311 21:34:46.507988   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:46.512337   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:46.555165   70908 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:46.555249   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.588554   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.623785   70908 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0311 21:34:44.627149   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:47.128405   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:45.052882   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Start
	I0311 21:34:45.053039   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring networks are active...
	I0311 21:34:45.053710   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring network default is active
	I0311 21:34:45.054156   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring network mk-default-k8s-diff-port-766430 is active
	I0311 21:34:45.054499   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Getting domain xml...
	I0311 21:34:45.055347   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Creating domain...
	I0311 21:34:46.378216   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting to get IP...
	I0311 21:34:46.379054   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.379376   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.379485   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.379392   71893 retry.go:31] will retry after 242.915621ms: waiting for machine to come up
	I0311 21:34:46.623729   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.624348   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.624375   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.624304   71893 retry.go:31] will retry after 274.237436ms: waiting for machine to come up
	I0311 21:34:46.899864   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.900347   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.900381   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.900296   71893 retry.go:31] will retry after 333.693752ms: waiting for machine to come up
	I0311 21:34:47.235751   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.236278   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.236309   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:47.236220   71893 retry.go:31] will retry after 513.728994ms: waiting for machine to come up
	I0311 21:34:47.752081   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.752585   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.752622   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:47.752553   71893 retry.go:31] will retry after 575.202217ms: waiting for machine to come up
	I0311 21:34:48.329095   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:48.329524   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:48.329557   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:48.329477   71893 retry.go:31] will retry after 741.05703ms: waiting for machine to come up
	I0311 21:34:49.072641   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.073163   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.073195   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:49.073101   71893 retry.go:31] will retry after 802.911807ms: waiting for machine to come up
	I0311 21:34:45.528876   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:47.530391   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:49.530451   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:46.625154   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:46.627732   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628080   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:46.628102   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628304   70908 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:46.633367   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:46.649537   70908 kubeadm.go:877] updating cluster {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:46.649677   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:34:46.649733   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:46.699194   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:46.699264   70908 ssh_runner.go:195] Run: which lz4
	I0311 21:34:46.703944   70908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:34:46.709224   70908 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:34:46.709258   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0311 21:34:48.747926   70908 crio.go:444] duration metric: took 2.044006932s to copy over tarball
	I0311 21:34:48.747994   70908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:34:49.629334   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:51.122454   70604 pod_ready.go:92] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:51.122481   70604 pod_ready.go:81] duration metric: took 11.006878828s for pod "coredns-5dd5756b68-czng9" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:51.122494   70604 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.227971   70604 pod_ready.go:92] pod "etcd-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.228001   70604 pod_ready.go:81] duration metric: took 1.105498501s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.228014   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.234804   70604 pod_ready.go:92] pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.234834   70604 pod_ready.go:81] duration metric: took 6.811865ms for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.234854   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.241448   70604 pod_ready.go:92] pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.241473   70604 pod_ready.go:81] duration metric: took 6.611927ms for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.241486   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n2qzt" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.249614   70604 pod_ready.go:92] pod "kube-proxy-n2qzt" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.249648   70604 pod_ready.go:81] duration metric: took 8.154372ms for pod "kube-proxy-n2qzt" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.249661   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:53.139924   70604 pod_ready.go:92] pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:53.139951   70604 pod_ready.go:81] duration metric: took 890.27792ms for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:53.139961   70604 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" ...
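The pod_ready lines above are produced by a helper that repeatedly fetches each kube-system pod and checks whether its Ready condition is True. A hypothetical client-go sketch of that check (illustrative only; the kubeconfig path is made up, and the pod name is taken from this run, not from the test helper's code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has condition Ready=True.
func isPodReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait in the log
	for time.Now().Before(deadline) {
		ready, err := isPodReady(cs, "kube-system", "etcd-embed-certs-743937")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}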
	I0311 21:34:49.877965   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.878438   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.878460   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:49.878397   71893 retry.go:31] will retry after 1.163030899s: waiting for machine to come up
	I0311 21:34:51.042660   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:51.043181   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:51.043210   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:51.043131   71893 retry.go:31] will retry after 1.225509553s: waiting for machine to come up
	I0311 21:34:52.269779   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:52.270321   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:52.270358   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:52.270250   71893 retry.go:31] will retry after 2.091046831s: waiting for machine to come up
	I0311 21:34:54.363231   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:54.363664   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:54.363693   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:54.363618   71893 retry.go:31] will retry after 1.759309864s: waiting for machine to come up
	I0311 21:34:52.031032   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:54.529537   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:52.300295   70908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.55227284s)
	I0311 21:34:52.300322   70908 crio.go:451] duration metric: took 3.552370125s to extract the tarball
	I0311 21:34:52.300331   70908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:34:52.349405   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:52.395791   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:52.395821   70908 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:34:52.395892   70908 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.395955   70908 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.396002   70908 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.396010   70908 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.395959   70908 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.395932   70908 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.395921   70908 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0311 21:34:52.395974   70908 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397721   70908 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.397760   70908 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.397767   70908 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.397768   70908 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.397762   70908 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397804   70908 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.398008   70908 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0311 21:34:52.398129   70908 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.548255   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.549300   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.560293   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.564094   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.564433   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.569516   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.578251   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0311 21:34:52.674385   70908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0311 21:34:52.674427   70908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.674475   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.725602   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.741797   70908 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0311 21:34:52.741840   70908 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.741882   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.793195   70908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0311 21:34:52.793239   70908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.793278   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798118   70908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0311 21:34:52.798174   70908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.798220   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798241   70908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0311 21:34:52.798277   70908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.798312   70908 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0311 21:34:52.798333   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798285   70908 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0311 21:34:52.798378   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.798399   70908 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.798434   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798336   70908 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0311 21:34:52.798510   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.957658   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.957712   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.957765   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.957816   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.957846   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0311 21:34:52.957904   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0311 21:34:52.957925   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:53.106649   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0311 21:34:53.106699   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0311 21:34:53.106913   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0311 21:34:53.107837   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0311 21:34:53.116024   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0311 21:34:53.122060   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0311 21:34:53.122118   70908 cache_images.go:92] duration metric: took 726.282306ms to LoadCachedImages
	W0311 21:34:53.122205   70908 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
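The "needs transfer" decisions above come from probing the container runtime with podman image inspect and comparing the returned ID against the expected hash. A small illustrative sketch of that probe, reusing an image name and hash printed in the log (not minikube's cache_images.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent asks the runtime for the image ID and compares it to the hash
// we expect; inspect fails when the image is absent.
func imagePresent(image, expectedID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return false
	}
	return strings.TrimSpace(string(out)) == expectedID
}

func main() {
	img := "registry.k8s.io/kube-apiserver:v1.20.0"
	id := "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	if !imagePresent(img, id) {
		fmt.Printf("%q needs transfer: not present at expected hash\n", img)
	}
}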
	I0311 21:34:53.122224   70908 kubeadm.go:928] updating node { 192.168.72.52 8443 v1.20.0 crio true true} ...
	I0311 21:34:53.122341   70908 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239315 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:53.122443   70908 ssh_runner.go:195] Run: crio config
	I0311 21:34:53.192161   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:34:53.192191   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:53.192211   70908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:53.192233   70908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.52 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239315 NodeName:old-k8s-version-239315 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0311 21:34:53.192405   70908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239315"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:53.192476   70908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0311 21:34:53.203965   70908 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:53.204019   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:53.215221   70908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0311 21:34:53.235943   70908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:34:53.255383   70908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0311 21:34:53.276634   70908 ssh_runner.go:195] Run: grep 192.168.72.52	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:53.281778   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
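The one-liner above makes the /etc/hosts update idempotent: it drops any existing line for the name, appends the fresh IP-to-name mapping to a temp file, and copies the result back. A hypothetical pure-Go equivalent of the same idea (minikube itself runs the shell pipeline over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so that exactly one line maps name to ip.
// Illustrative only; error handling and permissions are simplified.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Hypothetical usage mirroring the log line above.
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.52", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}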
	I0311 21:34:53.298479   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:53.450052   70908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:53.472459   70908 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315 for IP: 192.168.72.52
	I0311 21:34:53.472480   70908 certs.go:194] generating shared ca certs ...
	I0311 21:34:53.472524   70908 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:53.472676   70908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:53.472728   70908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:53.472771   70908 certs.go:256] generating profile certs ...
	I0311 21:34:53.472883   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/client.key
	I0311 21:34:53.472954   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key.1e888bb1
	I0311 21:34:53.473013   70908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key
	I0311 21:34:53.473143   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:53.473185   70908 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:53.473198   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:53.473237   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:53.473272   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:53.473307   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:53.473363   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:53.473988   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:53.527429   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:53.575908   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:53.622438   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:53.665366   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 21:34:53.702121   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0311 21:34:53.746066   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:53.779151   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:53.813286   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:53.847058   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:53.882261   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:53.912444   70908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:53.932592   70908 ssh_runner.go:195] Run: openssl version
	I0311 21:34:53.939200   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:53.955630   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960866   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960920   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.967258   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:53.981075   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:53.995065   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000196   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000272   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.008574   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:54.022782   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:54.037409   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042893   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042965   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.049497   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:54.062597   70908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:54.067971   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:54.074746   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:54.081323   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:54.088762   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:54.095529   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:54.102396   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
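The -checkend 86400 invocations above ask openssl whether each certificate expires within the next 24 hours. The same question can be answered with Go's standard library; a hypothetical sketch (the certificate path is one of those checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}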
	I0311 21:34:54.109553   70908 kubeadm.go:391] StartCluster: {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:54.109639   70908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:54.109689   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.152063   70908 cri.go:89] found id: ""
	I0311 21:34:54.152143   70908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:54.163988   70908 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:54.164005   70908 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:54.164011   70908 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:54.164050   70908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:54.175616   70908 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:54.176779   70908 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239315" does not appear in /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:34:54.177542   70908 kubeconfig.go:62] /home/jenkins/minikube-integration/18358-11004/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239315" cluster setting kubeconfig missing "old-k8s-version-239315" context setting]
	I0311 21:34:54.178649   70908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:54.180405   70908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:54.191864   70908 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.52
	I0311 21:34:54.191891   70908 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:54.191903   70908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:54.191948   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.233779   70908 cri.go:89] found id: ""
	I0311 21:34:54.233852   70908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:54.253672   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:54.266010   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:54.266038   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:54.266085   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:54.277867   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:54.277918   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:54.288984   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:54.300133   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:54.300197   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:54.312090   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.323997   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:54.324059   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.337225   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:54.348223   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:54.348266   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:54.359245   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:54.370003   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:54.525972   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.408437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.676995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.819933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.913736   70908 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:55.913811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
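The pgrep commands above and below poll for a kube-apiserver process roughly every half second until it appears. A minimal stand-in for that wait loop, with hypothetical timeout and interval values:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists or the
// timeout elapses.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for process: " + pattern)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
		fmt.Println(err)
	}
}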
	I0311 21:34:55.147500   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:57.148276   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:56.124678   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:56.125150   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:56.125183   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:56.125101   71893 retry.go:31] will retry after 2.284226205s: waiting for machine to come up
	I0311 21:34:58.412391   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:58.412973   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:58.413002   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:58.412923   71893 retry.go:31] will retry after 4.532871869s: waiting for machine to come up
	I0311 21:34:57.031683   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:59.032261   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:56.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:56.914753   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.413928   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.914123   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.413931   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.914199   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.414205   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.913880   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.914121   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.148774   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:01.646997   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:03.647990   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:02.948316   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:02.948762   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:35:02.948790   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:35:02.948704   71893 retry.go:31] will retry after 4.885152649s: waiting for machine to come up
	I0311 21:35:01.529589   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:04.028860   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:01.414003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:01.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.913977   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.414740   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.914735   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.414726   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.914846   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.914715   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.648516   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:08.147744   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:07.835002   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.835551   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Found IP for machine: 192.168.61.11
	I0311 21:35:07.835585   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Reserving static IP address...
	I0311 21:35:07.835601   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has current primary IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.836026   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-766430", mac: "52:54:00:41:07:8d", ip: "192.168.61.11"} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.836055   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | skip adding static IP to network mk-default-k8s-diff-port-766430 - found existing host DHCP lease matching {name: "default-k8s-diff-port-766430", mac: "52:54:00:41:07:8d", ip: "192.168.61.11"}
	I0311 21:35:07.836075   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Reserved static IP address: 192.168.61.11
	I0311 21:35:07.836110   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Getting to WaitForSSH function...
	I0311 21:35:07.836125   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for SSH to be available...
	I0311 21:35:07.838230   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.838601   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.838631   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.838757   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Using SSH client type: external
	I0311 21:35:07.838784   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa (-rw-------)
	I0311 21:35:07.838830   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:35:07.838871   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | About to run SSH command:
	I0311 21:35:07.838897   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | exit 0
	I0311 21:35:07.968765   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | SSH cmd err, output: <nil>: 
	I0311 21:35:07.969119   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetConfigRaw
	I0311 21:35:07.969756   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:07.972490   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.972921   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.972949   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.973180   70417 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/config.json ...
	I0311 21:35:07.973362   70417 machine.go:94] provisionDockerMachine start ...
	I0311 21:35:07.973381   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:07.973582   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:07.975926   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.976254   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.976277   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.976419   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:07.976566   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:07.976704   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:07.976847   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:07.976991   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:07.977161   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:07.977171   70417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:35:08.093841   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:35:08.093864   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.094076   70417 buildroot.go:166] provisioning hostname "default-k8s-diff-port-766430"
	I0311 21:35:08.094100   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.094329   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.097134   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.097498   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.097528   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.097670   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.097854   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.098021   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.098178   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.098409   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.098642   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.098657   70417 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-766430 && echo "default-k8s-diff-port-766430" | sudo tee /etc/hostname
	I0311 21:35:08.233860   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766430
	
	I0311 21:35:08.233890   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.236977   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.237387   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.237408   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.237596   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.237791   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.237962   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.238194   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.238359   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.238515   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.238532   70417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-766430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-766430/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-766430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:35:08.363393   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
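	[editor's note, not log output] The shell fragment above ensures exactly one "127.0.1.1 <hostname>" alias in /etc/hosts on the guest. Purely as an illustration (this is not minikube's implementation, which runs the shell shown above over SSH), a minimal Go sketch of the same idempotent edit; the path and hostname are taken from this log, and writing /etc/hosts requires root:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell snippet above: if no line already maps
// the hostname, either rewrite an existing 127.0.1.1 line or append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")

	hostRe := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
	loopbackRe := regexp.MustCompile(`^127\.0\.1\.1\s`)

	for _, l := range lines {
		if hostRe.MatchString(l) {
			return nil // entry already present, nothing to do
		}
	}
	for i, l := range lines {
		if loopbackRe.MatchString(l) {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing loopback alias
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	// no 127.0.1.1 line at all: append one
	lines = append(lines, fmt.Sprintf("127.0.1.1 %s", hostname))
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "default-k8s-diff-port-766430"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}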
	I0311 21:35:08.363419   70417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:35:08.363471   70417 buildroot.go:174] setting up certificates
	I0311 21:35:08.363484   70417 provision.go:84] configureAuth start
	I0311 21:35:08.363497   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.363780   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:08.366605   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.366990   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.367012   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.367139   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.369314   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.369650   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.369676   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.369798   70417 provision.go:143] copyHostCerts
	I0311 21:35:08.369853   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:35:08.369863   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:35:08.369915   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:35:08.370005   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:35:08.370013   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:35:08.370032   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:35:08.370091   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:35:08.370098   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:35:08.370114   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:35:08.370169   70417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-766430 san=[127.0.0.1 192.168.61.11 default-k8s-diff-port-766430 localhost minikube]
	I0311 21:35:08.542469   70417 provision.go:177] copyRemoteCerts
	I0311 21:35:08.542529   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:35:08.542550   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.545388   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.545750   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.545782   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.545958   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.546115   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.546264   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.546360   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:08.635866   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:35:08.667490   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0311 21:35:08.697944   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:35:08.726836   70417 provision.go:87] duration metric: took 363.34159ms to configureAuth
	I0311 21:35:08.726860   70417 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:35:08.727033   70417 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:35:08.727115   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.730050   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.730458   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.730489   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.730788   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.730987   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.731170   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.731317   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.731466   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.731607   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.731629   70417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:35:09.035100   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:35:09.035129   70417 machine.go:97] duration metric: took 1.061753229s to provisionDockerMachine
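	[editor's note, not log output] The "%!s(MISSING)" fragments in the logged command above are Go's fmt package marking a %s verb that was formatted without a matching argument; they are almost certainly a logging artifact rather than literal text sent to the guest, since the command output confirms the intended CRIO_MINIKUBE_OPTIONS content was written. A one-line reproduction of the marker:

package main

import "fmt"

func main() {
	// A format verb with no matching argument renders literally as
	// %!s(MISSING); that is what shows up in the logged command text.
	out := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s ...")
	fmt.Println(out) // sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) ...
}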
	I0311 21:35:09.035142   70417 start.go:293] postStartSetup for "default-k8s-diff-port-766430" (driver="kvm2")
	I0311 21:35:09.035151   70417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:35:09.035165   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.035458   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:35:09.035484   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.038340   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.038638   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.038668   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.038829   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.039027   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.039178   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.039343   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:09.133013   70417 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:35:09.138043   70417 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:35:09.138065   70417 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:35:09.138166   70417 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:35:09.138259   70417 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:35:09.138364   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:35:09.149527   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:35:09.176424   70417 start.go:296] duration metric: took 141.271199ms for postStartSetup
	I0311 21:35:09.176460   70417 fix.go:56] duration metric: took 24.15021813s for fixHost
	I0311 21:35:09.176479   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.179447   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.179830   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.179859   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.180147   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.180402   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.180566   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.180758   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.180974   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:09.181186   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:09.181200   70417 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:35:09.297740   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192909.282566583
	
	I0311 21:35:09.297764   70417 fix.go:216] guest clock: 1710192909.282566583
	I0311 21:35:09.297773   70417 fix.go:229] Guest: 2024-03-11 21:35:09.282566583 +0000 UTC Remote: 2024-03-11 21:35:09.176465496 +0000 UTC m=+364.839103648 (delta=106.101087ms)
	I0311 21:35:09.297795   70417 fix.go:200] guest clock delta is within tolerance: 106.101087ms
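	[editor's note, not log output] The clock check above effectively runs `date +%s.%N` on the guest and compares it with the host clock. A minimal sketch of that comparison, using the guest timestamp from this log; the 2s tolerance is an assumption, the log only states the delta was within tolerance:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1710192909.282566583") // value taken from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	const tolerance = 2 * time.Second // assumed tolerance, not from the log
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}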
	I0311 21:35:09.297802   70417 start.go:83] releasing machines lock for "default-k8s-diff-port-766430", held for 24.271590337s
	I0311 21:35:09.297825   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.298067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:09.300989   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.301399   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.301422   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.301604   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302091   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302291   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302385   70417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:35:09.302433   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.302490   70417 ssh_runner.go:195] Run: cat /version.json
	I0311 21:35:09.302515   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.305403   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305572   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305802   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.305831   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305912   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.306042   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.306067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.306067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.306223   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.306351   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.306430   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:09.306511   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.306645   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.306772   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:06.528726   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:09.029055   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:09.419852   70417 ssh_runner.go:195] Run: systemctl --version
	I0311 21:35:09.427141   70417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:35:09.579321   70417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:35:09.586396   70417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:35:09.586470   70417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:35:09.606617   70417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:35:09.606639   70417 start.go:494] detecting cgroup driver to use...
	I0311 21:35:09.606705   70417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:35:09.627066   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:35:09.646091   70417 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:35:09.646151   70417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:35:09.662307   70417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:35:09.679793   70417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:35:09.828827   70417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:35:09.984773   70417 docker.go:233] disabling docker service ...
	I0311 21:35:09.984843   70417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:35:10.003968   70417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:35:10.018609   70417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:35:10.174297   70417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:35:10.316762   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:35:10.338008   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:35:10.359320   70417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:35:10.359374   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.371953   70417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:35:10.372008   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.384823   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.397305   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
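	[editor's note, not log output] The sed invocations above pin the pause image, the cgroup manager and the conmon cgroup in /etc/crio/crio.conf.d/02-crio.conf. A small sketch of the same line-oriented rewrite (plain text substitution, not TOML-aware, mirroring the sed approach; setConfKey and the example file contents are invented for this sketch):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// setConfKey replaces any existing `key = ...` line with `key = "value"`,
// appending the line if the key is not present yet.
func setConfKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return strings.TrimRight(conf, "\n") + "\n" + line + "\n"
}

func main() {
	// example contents, not the real 02-crio.conf
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	conf = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setConfKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}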
	I0311 21:35:10.409521   70417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:35:10.424714   70417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:35:10.438470   70417 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:35:10.438529   70417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:35:10.454436   70417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
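	[editor's note, not log output] The failed sysctl probe above is expected: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the next steps are `modprobe br_netfilter` and enabling IPv4 forwarding. A sketch of that sequence (requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The sysctl key only appears once br_netfilter is loaded, which is why
	// the probe above fails with status 255 before modprobe runs.
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
			os.Exit(1)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and IPv4 forwarding enabled")
}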
	I0311 21:35:10.465004   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:35:10.611379   70417 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:35:10.786860   70417 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:35:10.786959   70417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:35:10.792496   70417 start.go:562] Will wait 60s for crictl version
	I0311 21:35:10.792551   70417 ssh_runner.go:195] Run: which crictl
	I0311 21:35:10.797079   70417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:35:10.837010   70417 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
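	[editor's note, not log output] The two 60s waits above (first for the CRI-O socket file, then for crictl to answer over it) can be sketched as below; the one-second polling interval is an assumption, the log only states the budgets:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForCRISocket waits for the socket file to appear and for
// `crictl version` to succeed against the running runtime.
func waitForCRISocket(sock string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out) // RuntimeName, RuntimeVersion, ...
				return nil
			}
		}
		time.Sleep(time.Second) // assumed polling interval
	}
	return fmt.Errorf("CRI socket %s not ready within %s", sock, timeout)
}

func main() {
	if err := waitForCRISocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}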
	I0311 21:35:10.837086   70417 ssh_runner.go:195] Run: crio --version
	I0311 21:35:10.868308   70417 ssh_runner.go:195] Run: crio --version
	I0311 21:35:10.900087   70417 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 21:35:06.414389   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:06.914233   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.414565   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.914773   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.414348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.914743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.413987   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.914698   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.150688   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:12.648444   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:10.901304   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:10.904103   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:10.904380   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:10.904407   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:10.904557   70417 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0311 21:35:10.909585   70417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:35:10.924163   70417 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-766430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:35:10.924311   70417 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:35:10.924408   70417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:35:10.969555   70417 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 21:35:10.969623   70417 ssh_runner.go:195] Run: which lz4
	I0311 21:35:10.974054   70417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0311 21:35:10.978776   70417 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:35:10.978811   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 21:35:12.893346   70417 crio.go:444] duration metric: took 1.91931676s to copy over tarball
	I0311 21:35:12.893421   70417 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:35:11.031301   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:13.527896   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:11.414320   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:11.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.414529   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.914476   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.414282   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.914426   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.414521   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.914001   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.414839   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.913921   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.648625   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:17.148688   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:15.772070   70417 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.878627154s)
	I0311 21:35:15.772094   70417 crio.go:451] duration metric: took 2.878719213s to extract the tarball
	I0311 21:35:15.772101   70417 ssh_runner.go:146] rm: /preloaded.tar.lz4
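	[editor's note, not log output] The preload handling above copies the preloaded-images tarball to the guest, extracts it with tar/lz4 into /var and removes it, reporting duration metrics for each step. An illustrative wrapper around the same tar invocation with a duration metric (paths as in the log; this is a sketch, not ssh_runner itself):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload shells out to the tar invocation shown in the log and times it.
func extractPreload(tarball, dest string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return time.Since(start), nil
}

func main() {
	d, err := extractPreload("/preloaded.tar.lz4", "/var")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", d)
}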
	I0311 21:35:15.818581   70417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:35:15.872635   70417 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:35:15.872658   70417 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:35:15.872667   70417 kubeadm.go:928] updating node { 192.168.61.11 8444 v1.28.4 crio true true} ...
	I0311 21:35:15.872823   70417 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-766430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:35:15.872933   70417 ssh_runner.go:195] Run: crio config
	I0311 21:35:15.928776   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:35:15.928803   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:35:15.928818   70417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:35:15.928843   70417 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-766430 NodeName:default-k8s-diff-port-766430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:35:15.929018   70417 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-766430"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
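	[editor's note, not log output] The rendered kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check before feeding such a stream to `kubeadm init` is to walk its documents and print each apiVersion/kind; the sketch below does that with the gopkg.in/yaml.v3 module (an external dependency) over an abbreviated copy that keeps only the kinds from the config above:

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// abbreviated copy of the stream above, kinds only
	const config = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	dec := yaml.NewDecoder(strings.NewReader(config))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("decode error:", err)
			return
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}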
	
	I0311 21:35:15.929090   70417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 21:35:15.941853   70417 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:35:15.941908   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:35:15.954936   70417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0311 21:35:15.975236   70417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:35:15.994509   70417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0311 21:35:16.014058   70417 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I0311 21:35:16.018972   70417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:35:16.035169   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:35:16.160453   70417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:35:16.182252   70417 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430 for IP: 192.168.61.11
	I0311 21:35:16.182272   70417 certs.go:194] generating shared ca certs ...
	I0311 21:35:16.182286   70417 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:35:16.182419   70417 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:35:16.182465   70417 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:35:16.182475   70417 certs.go:256] generating profile certs ...
	I0311 21:35:16.182545   70417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/client.key
	I0311 21:35:16.182601   70417 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.key.2c00376c
	I0311 21:35:16.182635   70417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.key
	I0311 21:35:16.182754   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:35:16.182783   70417 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:35:16.182789   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:35:16.182823   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:35:16.182844   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:35:16.182867   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:35:16.182901   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:35:16.183517   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:35:16.231409   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:35:16.277004   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:35:16.315346   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:35:16.352697   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0311 21:35:16.388570   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 21:35:16.422830   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:35:16.452562   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 21:35:16.480976   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:35:16.507149   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:35:16.535832   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:35:16.566697   70417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:35:16.587454   70417 ssh_runner.go:195] Run: openssl version
	I0311 21:35:16.593880   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:35:16.608197   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.613604   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.613673   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.620156   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:35:16.632634   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:35:16.646047   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.652530   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.652591   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.660480   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:35:16.673572   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:35:16.687161   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.692589   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.692632   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.705471   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:35:16.718251   70417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:35:16.723979   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:35:16.731335   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:35:16.738485   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:35:16.745489   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:35:16.752295   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:35:16.759251   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
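	[editor's note, not log output] Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 24 hours (86400 seconds). The Go equivalent, shown here against one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd reports whether the certificate at path expires within the given
// window, matching the semantics of `openssl x509 -noout -checkend <seconds>`.
func checkEnd(path string, window time.Duration) (expiringSoon bool, err error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h, regeneration needed")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}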
	I0311 21:35:16.766128   70417 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-766430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:35:16.766237   70417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:35:16.766292   70417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:35:16.806418   70417 cri.go:89] found id: ""
	I0311 21:35:16.806478   70417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:35:16.821434   70417 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:35:16.821455   70417 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:35:16.821462   70417 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:35:16.821514   70417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:35:16.835457   70417 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:35:16.836764   70417 kubeconfig.go:125] found "default-k8s-diff-port-766430" server: "https://192.168.61.11:8444"
	I0311 21:35:16.839163   70417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:35:16.850037   70417 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.11
	I0311 21:35:16.850065   70417 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:35:16.850074   70417 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:35:16.850117   70417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:35:16.895532   70417 cri.go:89] found id: ""
	I0311 21:35:16.895612   70417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:35:16.913151   70417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:35:16.927989   70417 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:35:16.928014   70417 kubeadm.go:156] found existing configuration files:
	
	I0311 21:35:16.928073   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0311 21:35:16.939803   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:35:16.939849   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:35:16.950103   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0311 21:35:16.960164   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:35:16.960213   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:35:16.970349   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0311 21:35:16.980056   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:35:16.980098   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:35:16.990189   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0311 21:35:16.999799   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:35:16.999874   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:35:17.010502   70417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:35:17.021106   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:17.136170   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.044684   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.296278   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.376702   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.473740   70417 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:35:18.473840   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.974894   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.529099   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:17.755777   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:20.028341   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:16.414018   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:16.914685   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.414894   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.914319   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.414875   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.914338   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.414496   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.914396   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.414731   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.914149   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.648967   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:22.148024   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:19.474609   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.499907   70417 api_server.go:72] duration metric: took 1.026169594s to wait for apiserver process to appear ...
	I0311 21:35:19.499931   70417 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:35:19.499951   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:19.500566   70417 api_server.go:269] stopped: https://192.168.61.11:8444/healthz: Get "https://192.168.61.11:8444/healthz": dial tcp 192.168.61.11:8444: connect: connection refused
	I0311 21:35:20.000807   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:22.693958   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:35:22.693991   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:35:22.694006   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:22.772747   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:22.772792   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:23.000004   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:23.004763   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:23.004805   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:23.500112   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:23.507209   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:23.507236   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:24.000861   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:24.006793   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:24.006830   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:24.500264   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:24.508242   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0311 21:35:24.520230   70417 api_server.go:141] control plane version: v1.28.4
	I0311 21:35:24.520255   70417 api_server.go:131] duration metric: took 5.020318338s to wait for apiserver health ...
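	(Illustrative note, not part of the captured log: the healthz wait recorded above — api_server.go repeatedly checking https://192.168.61.11:8444/healthz until it stops returning 403/500 and answers 200 — can be approximated with a small Go sketch like the one below. The endpoint URL, poll interval, and timeout are assumptions for illustration only; this is not minikube's actual implementation.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz GETs the apiserver /healthz endpoint until it returns 200 OK
	// or the deadline expires, mirroring the wait loop visible in the log above.
	func pollHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			// During bootstrap the apiserver serves a self-signed certificate,
			// so certificate verification is skipped for this probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: control plane is reachable
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.61.11:8444/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}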
	I0311 21:35:24.520285   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:35:24.520291   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:35:24.522151   70417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:35:22.029963   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:24.530052   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:21.414126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:21.914012   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.414680   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.414478   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.914770   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.414370   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.914772   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.413991   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.914516   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.149179   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:26.647134   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:28.647725   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:24.523964   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:35:24.538536   70417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:35:24.583279   70417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:35:24.594703   70417 system_pods.go:59] 8 kube-system pods found
	I0311 21:35:24.594730   70417 system_pods.go:61] "coredns-5dd5756b68-pkn9d" [ee4de3f7-1044-4dc9-91dc-d9b23493b0bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:35:24.594737   70417 system_pods.go:61] "etcd-default-k8s-diff-port-766430" [96b9327c-f97d-463f-9d1e-3210b4032aab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:35:24.594751   70417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766430" [fc650f48-2e28-4219-8571-8b6c43891eb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:35:24.594763   70417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766430" [c7cc5d40-ad56-4132-ab81-3422ffe1d5b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:35:24.594772   70417 system_pods.go:61] "kube-proxy-cggzr" [f6b7fe4e-7d57-4604-b63d-f9890826b659] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:35:24.594784   70417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766430" [8a156fec-b2f3-46e8-bf0d-0bf291ef8783] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:35:24.594795   70417 system_pods.go:61] "metrics-server-57f55c9bc5-kxl6n" [ac62700b-a39a-480e-841e-852bf3c66e7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:35:24.594805   70417 system_pods.go:61] "storage-provisioner" [a0b03582-0d90-4a7f-919c-0552046edcb5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:35:24.594821   70417 system_pods.go:74] duration metric: took 11.523907ms to wait for pod list to return data ...
	I0311 21:35:24.594830   70417 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:35:24.606500   70417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:35:24.606529   70417 node_conditions.go:123] node cpu capacity is 2
	I0311 21:35:24.606546   70417 node_conditions.go:105] duration metric: took 11.711241ms to run NodePressure ...
	I0311 21:35:24.606565   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:24.893361   70417 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:35:24.899200   70417 kubeadm.go:733] kubelet initialised
	I0311 21:35:24.899225   70417 kubeadm.go:734] duration metric: took 5.837351ms waiting for restarted kubelet to initialise ...
	I0311 21:35:24.899235   70417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:35:24.905858   70417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:26.912640   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:28.916566   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:27.029381   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:29.529565   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:26.414267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:26.914876   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.414469   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.914513   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.414924   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.914126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.414526   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.914039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.414305   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.914438   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.147527   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:33.147694   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.413246   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.912878   70417 pod_ready.go:92] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:31.912899   70417 pod_ready.go:81] duration metric: took 7.007017714s for pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:31.912908   70417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:33.977091   70417 pod_ready.go:102] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:32.029295   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:34.529021   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.414610   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.914472   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.414158   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.914169   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.414745   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.914820   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.414071   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.914228   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.414135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.914695   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.148058   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:37.648200   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.422565   70417 pod_ready.go:102] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.921304   70417 pod_ready.go:92] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.921328   70417 pod_ready.go:81] duration metric: took 5.008411943s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.921340   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.927268   70417 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.927284   70417 pod_ready.go:81] duration metric: took 5.936969ms for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.927292   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.932540   70417 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.932563   70417 pod_ready.go:81] duration metric: took 5.264737ms for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.932575   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cggzr" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.937456   70417 pod_ready.go:92] pod "kube-proxy-cggzr" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.937473   70417 pod_ready.go:81] duration metric: took 4.892276ms for pod "kube-proxy-cggzr" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.937480   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.942372   70417 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.942390   70417 pod_ready.go:81] duration metric: took 4.902792ms for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
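	(Illustrative note, not part of the captured log: the pod_ready.go waits above poll each kube-system pod until its Ready condition is True. A minimal client-go sketch of that pattern follows; the kubeconfig path, namespace, pod name, and poll interval are placeholders chosen for the example, not values taken from minikube's code.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls a pod until its PodReady condition reports True,
	// similar in spirit to the pod_ready.go waits recorded in the log above.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod reports Ready=True
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		// Placeholder kubeconfig path; adjust for the environment being queried.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(cs, "kube-system", "coredns-5dd5756b68-pkn9d", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}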
	I0311 21:35:36.942401   70417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:38.949452   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.531316   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:39.030491   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.414435   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:36.914157   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.414539   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.914811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.414070   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.914303   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.413935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.914135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.414569   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.914106   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.147355   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:42.148353   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:40.950204   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:42.950335   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:41.528874   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:43.530140   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:41.414404   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:41.914323   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.414215   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.914566   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.414671   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.914658   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.414703   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.913966   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.414045   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.914260   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.648282   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:47.148247   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:45.449963   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:47.451576   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:46.029164   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:48.529137   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:46.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:46.914821   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.414210   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.914008   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.413884   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.914160   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.414877   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.914379   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.414293   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.913867   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.148585   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.648372   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:49.949667   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.950874   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:53.953067   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:50.529616   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:53.030586   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.414582   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:51.914453   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.414668   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.914816   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.414768   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.914592   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.414743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.914307   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:55.414000   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:55.914553   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:55.914636   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:55.957434   70908 cri.go:89] found id: ""
	I0311 21:35:55.957459   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.957470   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:55.957477   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:55.957545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:55.995255   70908 cri.go:89] found id: ""
	I0311 21:35:55.995279   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.995290   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:55.995305   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:55.995364   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:56.038893   70908 cri.go:89] found id: ""
	I0311 21:35:56.038916   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.038926   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:56.038933   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:56.038990   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:54.147165   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.148641   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.647841   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.451057   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.950421   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:55.528922   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.029209   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:00.029912   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.081497   70908 cri.go:89] found id: ""
	I0311 21:35:56.081517   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.081528   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:56.081534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:56.081591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:56.120047   70908 cri.go:89] found id: ""
	I0311 21:35:56.120071   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.120079   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:56.120084   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:56.120156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:56.157350   70908 cri.go:89] found id: ""
	I0311 21:35:56.157370   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.157377   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:56.157382   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:56.157433   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:56.198324   70908 cri.go:89] found id: ""
	I0311 21:35:56.198354   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.198374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:56.198381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:56.198437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:56.236579   70908 cri.go:89] found id: ""
	I0311 21:35:56.236608   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.236619   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:35:56.236691   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:56.236712   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:56.377789   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:35:56.377809   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:56.377825   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:56.449765   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:56.449807   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:56.502417   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:56.502448   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:56.557205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:56.557241   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:35:59.073411   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:59.088205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:59.088287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:59.126458   70908 cri.go:89] found id: ""
	I0311 21:35:59.126486   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.126494   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:59.126499   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:59.126555   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:59.197887   70908 cri.go:89] found id: ""
	I0311 21:35:59.197911   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.197919   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:59.197924   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:59.197967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:59.239523   70908 cri.go:89] found id: ""
	I0311 21:35:59.239552   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.239562   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:59.239570   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:59.239642   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:59.280903   70908 cri.go:89] found id: ""
	I0311 21:35:59.280930   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.280940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:59.280947   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:59.281024   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:59.320218   70908 cri.go:89] found id: ""
	I0311 21:35:59.320242   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.320254   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:59.320260   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:59.320314   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:59.361235   70908 cri.go:89] found id: ""
	I0311 21:35:59.361265   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.361276   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:59.361283   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:59.361352   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:59.409477   70908 cri.go:89] found id: ""
	I0311 21:35:59.409503   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.409514   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:59.409522   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:59.409568   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:59.454704   70908 cri.go:89] found id: ""
	I0311 21:35:59.454728   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.454739   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:35:59.454748   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:59.454767   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:59.525839   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:59.525864   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:59.569577   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:59.569606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:59.628402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:59.628437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:35:59.647181   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:59.647208   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:59.731300   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
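	(Illustrative note, not part of the captured log: the cri.go entries above list control-plane containers by shelling out to `crictl ps -a --quiet --name=<name>`; an empty result produces the `No container was found matching "<name>"` warnings. The Go sketch below shows that pattern under the assumption that crictl is on PATH and requires sudo, matching the ssh_runner commands in the log; it is not the cri.go implementation itself.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns the
	// container IDs it prints, one per line. An empty slice corresponds to the
	// "No container was found matching" warnings seen in the log above.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Println("crictl failed:", err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
			} else {
				fmt.Printf("%s: %v\n", name, ids)
			}
		}
	}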
	I0311 21:36:00.650515   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:03.146560   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:01.449702   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:03.950341   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:02.030569   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:04.529453   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:02.232458   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:02.246948   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:02.247025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:02.290561   70908 cri.go:89] found id: ""
	I0311 21:36:02.290588   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.290599   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:02.290605   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:02.290659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:02.333788   70908 cri.go:89] found id: ""
	I0311 21:36:02.333814   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.333821   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:02.333826   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:02.333877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:02.375774   70908 cri.go:89] found id: ""
	I0311 21:36:02.375798   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.375806   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:02.375812   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:02.375862   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:02.414741   70908 cri.go:89] found id: ""
	I0311 21:36:02.414781   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.414803   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:02.414810   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:02.414875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:02.456637   70908 cri.go:89] found id: ""
	I0311 21:36:02.456660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.456670   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:02.456677   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:02.456759   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:02.494633   70908 cri.go:89] found id: ""
	I0311 21:36:02.494660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.494670   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:02.494678   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:02.494738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:02.536187   70908 cri.go:89] found id: ""
	I0311 21:36:02.536212   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.536223   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:02.536230   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:02.536291   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:02.574933   70908 cri.go:89] found id: ""
	I0311 21:36:02.574962   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.574973   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:02.574985   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:02.575001   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:02.656610   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:02.656637   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:02.656653   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:02.730514   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:02.730548   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:02.776009   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:02.776041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:02.829792   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:02.829826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.345568   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:05.360082   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:05.360164   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:05.406106   70908 cri.go:89] found id: ""
	I0311 21:36:05.406131   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.406141   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:05.406147   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:05.406203   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:05.449584   70908 cri.go:89] found id: ""
	I0311 21:36:05.449608   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.449617   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:05.449624   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:05.449680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:05.493869   70908 cri.go:89] found id: ""
	I0311 21:36:05.493898   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.493912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:05.493928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:05.493994   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:05.563506   70908 cri.go:89] found id: ""
	I0311 21:36:05.563532   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.563542   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:05.563549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:05.563600   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:05.630140   70908 cri.go:89] found id: ""
	I0311 21:36:05.630165   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.630172   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:05.630177   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:05.630230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:05.675584   70908 cri.go:89] found id: ""
	I0311 21:36:05.675612   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.675623   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:05.675631   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:05.675689   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:05.720521   70908 cri.go:89] found id: ""
	I0311 21:36:05.720548   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.720557   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:05.720563   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:05.720615   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:05.759323   70908 cri.go:89] found id: ""
	I0311 21:36:05.759351   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.759359   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:05.759367   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:05.759379   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:05.801024   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:05.801050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:05.856330   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:05.856356   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.871299   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:05.871324   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:05.950218   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:05.950245   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:05.950259   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:05.148227   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:07.647389   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:05.950833   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:08.449548   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:07.028964   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:09.029396   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:08.535502   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:08.552152   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:08.552220   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:08.596602   70908 cri.go:89] found id: ""
	I0311 21:36:08.596707   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.596731   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:08.596755   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:08.596820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:08.641091   70908 cri.go:89] found id: ""
	I0311 21:36:08.641119   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.641130   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:08.641137   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:08.641198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:08.684466   70908 cri.go:89] found id: ""
	I0311 21:36:08.684494   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.684503   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:08.684510   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:08.684570   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:08.730899   70908 cri.go:89] found id: ""
	I0311 21:36:08.730924   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.730931   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:08.730937   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:08.730997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:08.775293   70908 cri.go:89] found id: ""
	I0311 21:36:08.775317   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.775324   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:08.775330   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:08.775387   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:08.816098   70908 cri.go:89] found id: ""
	I0311 21:36:08.816126   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.816137   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:08.816144   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:08.816207   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:08.857413   70908 cri.go:89] found id: ""
	I0311 21:36:08.857449   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.857460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:08.857476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:08.857541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:08.898252   70908 cri.go:89] found id: ""
	I0311 21:36:08.898283   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.898293   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:08.898302   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:08.898313   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:08.955162   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:08.955188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:08.970234   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:08.970258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:09.055025   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:09.055043   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:09.055055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:09.140345   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:09.140376   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:10.148323   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:12.647037   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:10.450796   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:12.450839   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:11.529842   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:14.029706   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:11.681542   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:11.697407   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:11.697481   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:11.740239   70908 cri.go:89] found id: ""
	I0311 21:36:11.740264   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.740274   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:11.740280   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:11.740336   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:11.777625   70908 cri.go:89] found id: ""
	I0311 21:36:11.777655   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.777667   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:11.777674   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:11.777745   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:11.817202   70908 cri.go:89] found id: ""
	I0311 21:36:11.817226   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.817233   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:11.817239   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:11.817306   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:11.858912   70908 cri.go:89] found id: ""
	I0311 21:36:11.858933   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.858940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:11.858945   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:11.858998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:11.897841   70908 cri.go:89] found id: ""
	I0311 21:36:11.897876   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.897887   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:11.897895   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:11.897955   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:11.936181   70908 cri.go:89] found id: ""
	I0311 21:36:11.936207   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.936218   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:11.936226   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:11.936293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:11.981882   70908 cri.go:89] found id: ""
	I0311 21:36:11.981905   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.981915   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:11.981922   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:11.981982   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:12.022270   70908 cri.go:89] found id: ""
	I0311 21:36:12.022298   70908 logs.go:276] 0 containers: []
	W0311 21:36:12.022309   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:12.022320   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:12.022333   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:12.074640   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:12.074668   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:12.089854   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:12.089879   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:12.179578   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:12.179595   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:12.179606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:12.263249   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:12.263285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:14.811547   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:14.827075   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:14.827175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:14.870512   70908 cri.go:89] found id: ""
	I0311 21:36:14.870544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.870555   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:14.870563   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:14.870625   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:14.908521   70908 cri.go:89] found id: ""
	I0311 21:36:14.908544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.908553   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:14.908558   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:14.908607   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:14.951702   70908 cri.go:89] found id: ""
	I0311 21:36:14.951729   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.951739   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:14.951746   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:14.951805   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:14.992590   70908 cri.go:89] found id: ""
	I0311 21:36:14.992618   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.992630   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:14.992638   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:14.992698   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:15.034535   70908 cri.go:89] found id: ""
	I0311 21:36:15.034556   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.034563   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:15.034569   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:15.034614   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:15.077175   70908 cri.go:89] found id: ""
	I0311 21:36:15.077200   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.077210   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:15.077218   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:15.077283   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:15.121500   70908 cri.go:89] found id: ""
	I0311 21:36:15.121530   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.121541   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:15.121549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:15.121655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:15.162712   70908 cri.go:89] found id: ""
	I0311 21:36:15.162738   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.162748   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:15.162757   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:15.162776   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:15.241469   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:15.241488   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:15.241499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:15.322257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:15.322291   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:15.368258   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:15.368285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:15.427131   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:15.427163   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:14.648776   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:17.148710   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:14.452948   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:16.949085   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:18.950111   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:16.030409   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:18.529122   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:17.944348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:17.958629   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:17.958704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:17.995869   70908 cri.go:89] found id: ""
	I0311 21:36:17.995895   70908 logs.go:276] 0 containers: []
	W0311 21:36:17.995904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:17.995914   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:17.995976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:18.032273   70908 cri.go:89] found id: ""
	I0311 21:36:18.032300   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.032308   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:18.032313   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:18.032361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:18.072497   70908 cri.go:89] found id: ""
	I0311 21:36:18.072519   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.072526   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:18.072532   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:18.072578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:18.110091   70908 cri.go:89] found id: ""
	I0311 21:36:18.110119   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.110129   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:18.110136   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:18.110199   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:18.152217   70908 cri.go:89] found id: ""
	I0311 21:36:18.152261   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.152272   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:18.152280   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:18.152347   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:18.193957   70908 cri.go:89] found id: ""
	I0311 21:36:18.193989   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.194000   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:18.194008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:18.194086   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:18.231828   70908 cri.go:89] found id: ""
	I0311 21:36:18.231861   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.231873   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:18.231880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:18.231939   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:18.271862   70908 cri.go:89] found id: ""
	I0311 21:36:18.271896   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.271907   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:18.271917   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:18.271933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:18.325405   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:18.325440   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:18.344560   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:18.344593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:18.425051   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:18.425075   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:18.425093   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:18.513247   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:18.513287   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:19.646758   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.647702   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.649318   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.450692   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.950088   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.028812   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.029828   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.060499   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:21.076648   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:21.076716   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:21.117270   70908 cri.go:89] found id: ""
	I0311 21:36:21.117298   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.117309   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:21.117317   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:21.117388   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:21.159005   70908 cri.go:89] found id: ""
	I0311 21:36:21.159045   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.159056   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:21.159063   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:21.159122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:21.196576   70908 cri.go:89] found id: ""
	I0311 21:36:21.196599   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.196609   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:21.196617   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:21.196677   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:21.237689   70908 cri.go:89] found id: ""
	I0311 21:36:21.237718   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.237729   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:21.237734   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:21.237783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:21.280662   70908 cri.go:89] found id: ""
	I0311 21:36:21.280696   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.280707   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:21.280714   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:21.280795   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:21.321475   70908 cri.go:89] found id: ""
	I0311 21:36:21.321501   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.321511   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:21.321518   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:21.321581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:21.365186   70908 cri.go:89] found id: ""
	I0311 21:36:21.365209   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.365216   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:21.365221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:21.365276   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:21.408678   70908 cri.go:89] found id: ""
	I0311 21:36:21.408713   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.408725   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:21.408754   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:21.408771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:21.466635   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:21.466663   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:21.482596   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:21.482622   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:21.556750   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:21.556769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:21.556780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:21.643095   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:21.643126   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.195112   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:24.208829   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:24.208895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:24.245956   70908 cri.go:89] found id: ""
	I0311 21:36:24.245981   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.245989   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:24.245995   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:24.246053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:24.289740   70908 cri.go:89] found id: ""
	I0311 21:36:24.289766   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.289778   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:24.289784   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:24.289846   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:24.336911   70908 cri.go:89] found id: ""
	I0311 21:36:24.336963   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.336977   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:24.336986   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:24.337057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:24.381715   70908 cri.go:89] found id: ""
	I0311 21:36:24.381739   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.381753   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:24.381761   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:24.381817   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:24.423759   70908 cri.go:89] found id: ""
	I0311 21:36:24.423787   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.423797   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:24.423805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:24.423882   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:24.468903   70908 cri.go:89] found id: ""
	I0311 21:36:24.468931   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.468946   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:24.468954   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:24.469013   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:24.509602   70908 cri.go:89] found id: ""
	I0311 21:36:24.509629   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.509639   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:24.509646   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:24.509706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:24.551483   70908 cri.go:89] found id: ""
	I0311 21:36:24.551511   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.551522   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:24.551532   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:24.551545   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:24.567123   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:24.567154   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:24.644215   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:24.644247   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:24.644262   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:24.726438   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:24.726469   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.779567   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:24.779596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:26.146823   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:28.148291   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:26.450637   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:28.949850   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:25.528542   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:27.529375   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:29.529701   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:27.337785   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:27.352504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:27.352578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:27.395787   70908 cri.go:89] found id: ""
	I0311 21:36:27.395809   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.395817   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:27.395823   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:27.395869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:27.441800   70908 cri.go:89] found id: ""
	I0311 21:36:27.441826   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.441834   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:27.441839   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:27.441893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:27.481761   70908 cri.go:89] found id: ""
	I0311 21:36:27.481791   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.481802   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:27.481809   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:27.481868   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:27.526981   70908 cri.go:89] found id: ""
	I0311 21:36:27.527011   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.527029   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:27.527037   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:27.527130   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:27.566569   70908 cri.go:89] found id: ""
	I0311 21:36:27.566602   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.566614   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:27.566622   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:27.566682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:27.607434   70908 cri.go:89] found id: ""
	I0311 21:36:27.607456   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.607464   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:27.607469   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:27.607529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:27.652648   70908 cri.go:89] found id: ""
	I0311 21:36:27.652674   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.652681   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:27.652686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:27.652756   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:27.691105   70908 cri.go:89] found id: ""
	I0311 21:36:27.691136   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.691148   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:27.691158   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:27.691173   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:27.706451   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:27.706477   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:27.788935   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:27.788959   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:27.788975   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:27.875721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:27.875758   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:27.927920   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:27.927951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.487728   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:30.503425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:30.503508   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:30.550846   70908 cri.go:89] found id: ""
	I0311 21:36:30.550868   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.550875   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:30.550881   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:30.550928   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:30.586886   70908 cri.go:89] found id: ""
	I0311 21:36:30.586915   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.586925   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:30.586934   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:30.586991   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:30.627849   70908 cri.go:89] found id: ""
	I0311 21:36:30.627884   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.627895   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:30.627902   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:30.627965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:30.669188   70908 cri.go:89] found id: ""
	I0311 21:36:30.669209   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.669216   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:30.669222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:30.669266   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:30.711676   70908 cri.go:89] found id: ""
	I0311 21:36:30.711697   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.711705   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:30.711710   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:30.711758   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:30.754218   70908 cri.go:89] found id: ""
	I0311 21:36:30.754240   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.754248   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:30.754253   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:30.754299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:30.791224   70908 cri.go:89] found id: ""
	I0311 21:36:30.791255   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.791263   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:30.791269   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:30.791328   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:30.831263   70908 cri.go:89] found id: ""
	I0311 21:36:30.831291   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.831301   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:30.831311   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:30.831326   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:30.876574   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:30.876600   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.928483   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:30.928509   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:30.944642   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:30.944665   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:31.026406   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:31.026428   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:31.026444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:30.648859   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.147907   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:30.952483   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.451714   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:32.028484   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:34.028948   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.611104   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:33.625644   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:33.625706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:33.664787   70908 cri.go:89] found id: ""
	I0311 21:36:33.664816   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.664825   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:33.664830   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:33.664894   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:33.704636   70908 cri.go:89] found id: ""
	I0311 21:36:33.704659   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.704666   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:33.704672   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:33.704717   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:33.744797   70908 cri.go:89] found id: ""
	I0311 21:36:33.744837   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.744848   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:33.744855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:33.744917   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:33.787435   70908 cri.go:89] found id: ""
	I0311 21:36:33.787464   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.787474   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:33.787482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:33.787541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:33.826578   70908 cri.go:89] found id: ""
	I0311 21:36:33.826606   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.826617   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:33.826624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:33.826684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:33.864854   70908 cri.go:89] found id: ""
	I0311 21:36:33.864875   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.864882   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:33.864887   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:33.864934   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:33.905366   70908 cri.go:89] found id: ""
	I0311 21:36:33.905397   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.905409   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:33.905416   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:33.905477   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:33.950196   70908 cri.go:89] found id: ""
	I0311 21:36:33.950222   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.950232   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:33.950243   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:33.950258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:34.001016   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:34.001049   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:34.059102   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:34.059131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:34.075879   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:34.075908   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:34.177114   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:34.177138   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:34.177161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:35.647611   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.147941   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:35.950147   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.449090   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:36.030072   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.527952   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:36.756459   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:36.772781   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:36.772867   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:36.820076   70908 cri.go:89] found id: ""
	I0311 21:36:36.820103   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.820111   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:36.820118   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:36.820169   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:36.859279   70908 cri.go:89] found id: ""
	I0311 21:36:36.859306   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.859317   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:36.859324   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:36.859383   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:36.899669   70908 cri.go:89] found id: ""
	I0311 21:36:36.899694   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.899705   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:36.899712   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:36.899770   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:36.938826   70908 cri.go:89] found id: ""
	I0311 21:36:36.938853   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.938864   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:36.938872   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:36.938957   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:36.976659   70908 cri.go:89] found id: ""
	I0311 21:36:36.976685   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.976693   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:36.976703   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:36.976772   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:37.015439   70908 cri.go:89] found id: ""
	I0311 21:36:37.015462   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.015469   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:37.015474   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:37.015519   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:37.057469   70908 cri.go:89] found id: ""
	I0311 21:36:37.057496   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.057507   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:37.057514   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:37.057579   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:37.106287   70908 cri.go:89] found id: ""
	I0311 21:36:37.106316   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.106325   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:37.106335   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:37.106352   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:37.122333   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:37.122367   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:37.197708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:37.197731   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:37.197742   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:37.281911   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:37.281944   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:37.335978   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:37.336011   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:39.891583   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:39.914741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:39.914823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:39.955751   70908 cri.go:89] found id: ""
	I0311 21:36:39.955773   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.955781   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:39.955786   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:39.955837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:39.997604   70908 cri.go:89] found id: ""
	I0311 21:36:39.997632   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.997642   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:39.997649   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:39.997711   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:40.039138   70908 cri.go:89] found id: ""
	I0311 21:36:40.039168   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.039178   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:40.039186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:40.039230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:40.079906   70908 cri.go:89] found id: ""
	I0311 21:36:40.079934   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.079945   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:40.079952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:40.080017   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:40.124116   70908 cri.go:89] found id: ""
	I0311 21:36:40.124141   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.124152   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:40.124159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:40.124221   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:40.165078   70908 cri.go:89] found id: ""
	I0311 21:36:40.165099   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.165108   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:40.165113   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:40.165158   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:40.203928   70908 cri.go:89] found id: ""
	I0311 21:36:40.203954   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.203962   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:40.203971   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:40.204018   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:40.244755   70908 cri.go:89] found id: ""
	I0311 21:36:40.244783   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.244793   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:40.244803   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:40.244819   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:40.302090   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:40.302125   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:40.318071   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:40.318097   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:40.405336   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:40.405363   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:40.405378   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:40.493262   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:40.493298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:40.148095   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.651483   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:40.449200   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.450259   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:40.528526   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.533619   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:45.029285   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:43.052419   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:43.068300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:43.068378   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:43.109665   70908 cri.go:89] found id: ""
	I0311 21:36:43.109701   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.109717   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:43.109725   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:43.109789   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:43.152233   70908 cri.go:89] found id: ""
	I0311 21:36:43.152253   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.152260   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:43.152265   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:43.152311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:43.194969   70908 cri.go:89] found id: ""
	I0311 21:36:43.194995   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.195002   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:43.195008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:43.195056   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:43.234555   70908 cri.go:89] found id: ""
	I0311 21:36:43.234581   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.234592   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:43.234597   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:43.234651   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:43.275188   70908 cri.go:89] found id: ""
	I0311 21:36:43.275214   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.275224   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:43.275232   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:43.275287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:43.314481   70908 cri.go:89] found id: ""
	I0311 21:36:43.314507   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.314515   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:43.314521   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:43.314580   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:43.353287   70908 cri.go:89] found id: ""
	I0311 21:36:43.353317   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.353328   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:43.353336   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:43.353395   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:43.396112   70908 cri.go:89] found id: ""
	I0311 21:36:43.396138   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.396150   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:43.396160   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:43.396175   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:43.456116   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:43.456143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:43.472992   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:43.473023   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:43.558281   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:43.558311   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:43.558327   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:43.641849   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:43.641885   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:45.147404   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.147574   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:44.954864   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.450806   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.029669   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:49.529505   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:46.187444   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:46.202848   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:46.202911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:46.244843   70908 cri.go:89] found id: ""
	I0311 21:36:46.244872   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.244880   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:46.244886   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:46.244933   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:46.297789   70908 cri.go:89] found id: ""
	I0311 21:36:46.297820   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.297831   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:46.297838   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:46.297903   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:46.353104   70908 cri.go:89] found id: ""
	I0311 21:36:46.353127   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.353134   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:46.353140   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:46.353211   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:46.426767   70908 cri.go:89] found id: ""
	I0311 21:36:46.426792   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.426799   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:46.426804   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:46.426858   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:46.469850   70908 cri.go:89] found id: ""
	I0311 21:36:46.469881   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.469891   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:46.469899   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:46.469960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:46.510692   70908 cri.go:89] found id: ""
	I0311 21:36:46.510718   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.510726   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:46.510732   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:46.510787   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:46.554445   70908 cri.go:89] found id: ""
	I0311 21:36:46.554468   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.554475   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:46.554482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:46.554527   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:46.592417   70908 cri.go:89] found id: ""
	I0311 21:36:46.592448   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.592458   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:46.592467   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:46.592480   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:46.607106   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:46.607146   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:46.691556   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:46.691575   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:46.691587   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:46.772468   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:46.772503   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:46.814478   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:46.814512   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.368451   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:49.383504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:49.383573   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:49.427392   70908 cri.go:89] found id: ""
	I0311 21:36:49.427415   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.427426   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:49.427434   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:49.427493   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:49.469022   70908 cri.go:89] found id: ""
	I0311 21:36:49.469044   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.469052   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:49.469059   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:49.469106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:49.510755   70908 cri.go:89] found id: ""
	I0311 21:36:49.510781   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.510792   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:49.510800   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:49.510886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:49.556594   70908 cri.go:89] found id: ""
	I0311 21:36:49.556631   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.556642   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:49.556649   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:49.556710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:49.597035   70908 cri.go:89] found id: ""
	I0311 21:36:49.597059   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.597067   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:49.597072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:49.597138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:49.642947   70908 cri.go:89] found id: ""
	I0311 21:36:49.642975   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.642985   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:49.642993   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:49.643051   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:49.681401   70908 cri.go:89] found id: ""
	I0311 21:36:49.681423   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.681430   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:49.681435   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:49.681478   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:49.718498   70908 cri.go:89] found id: ""
	I0311 21:36:49.718529   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.718539   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:49.718549   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:49.718563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:49.764483   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:49.764515   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.821261   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:49.821293   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:49.837110   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:49.837135   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:49.918507   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:49.918529   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:49.918541   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:49.648198   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.146837   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:49.450941   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:51.950760   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.030288   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:54.528831   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.500354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:52.516722   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:52.516811   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:52.563312   70908 cri.go:89] found id: ""
	I0311 21:36:52.563340   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.563354   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:52.563362   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:52.563421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:52.603545   70908 cri.go:89] found id: ""
	I0311 21:36:52.603572   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.603581   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:52.603588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:52.603657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:52.645624   70908 cri.go:89] found id: ""
	I0311 21:36:52.645648   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.645658   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:52.645665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:52.645722   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:52.693335   70908 cri.go:89] found id: ""
	I0311 21:36:52.693363   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.693373   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:52.693380   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:52.693437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:52.740272   70908 cri.go:89] found id: ""
	I0311 21:36:52.740310   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.740331   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:52.740341   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:52.740398   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:52.786241   70908 cri.go:89] found id: ""
	I0311 21:36:52.786276   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.786285   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:52.786291   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:52.786355   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:52.825013   70908 cri.go:89] found id: ""
	I0311 21:36:52.825042   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.825053   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:52.825061   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:52.825117   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:52.862867   70908 cri.go:89] found id: ""
	I0311 21:36:52.862892   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.862901   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:52.862908   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:52.862922   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:52.917005   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:52.917036   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:52.932086   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:52.932112   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:53.012379   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:53.012402   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:53.012413   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:53.096881   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:53.096913   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:55.640142   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:55.656664   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:55.656749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:55.697962   70908 cri.go:89] found id: ""
	I0311 21:36:55.697992   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.698000   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:55.698005   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:55.698059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:55.741888   70908 cri.go:89] found id: ""
	I0311 21:36:55.741910   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.741917   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:55.741921   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:55.741965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:55.779352   70908 cri.go:89] found id: ""
	I0311 21:36:55.779372   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.779381   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:55.779386   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:55.779430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:55.819496   70908 cri.go:89] found id: ""
	I0311 21:36:55.819530   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.819541   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:55.819549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:55.819612   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:55.859384   70908 cri.go:89] found id: ""
	I0311 21:36:55.859412   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.859419   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:55.859424   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:55.859473   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:55.899415   70908 cri.go:89] found id: ""
	I0311 21:36:55.899438   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.899445   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:55.899450   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:55.899496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:55.938595   70908 cri.go:89] found id: ""
	I0311 21:36:55.938625   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.938637   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:55.938645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:55.938710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:55.980064   70908 cri.go:89] found id: ""
	I0311 21:36:55.980089   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.980096   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:55.980103   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:55.980115   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:55.996222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:55.996297   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:36:54.147743   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.150270   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:58.648829   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:54.450767   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.949091   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:58.950443   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.529184   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:59.029323   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	W0311 21:36:56.081046   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:56.081074   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:56.081090   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:56.167748   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:56.167773   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:56.221118   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:56.221150   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:58.772403   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:58.789349   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:58.789421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:58.829945   70908 cri.go:89] found id: ""
	I0311 21:36:58.829974   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.829985   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:58.829993   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:58.830059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:58.877190   70908 cri.go:89] found id: ""
	I0311 21:36:58.877214   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.877224   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:58.877231   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:58.877295   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:58.920086   70908 cri.go:89] found id: ""
	I0311 21:36:58.920113   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.920122   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:58.920128   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:58.920189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:58.956864   70908 cri.go:89] found id: ""
	I0311 21:36:58.956890   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.956900   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:58.956907   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:58.956967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:58.999363   70908 cri.go:89] found id: ""
	I0311 21:36:58.999390   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.999400   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:58.999408   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:58.999469   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:59.041759   70908 cri.go:89] found id: ""
	I0311 21:36:59.041787   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.041797   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:59.041803   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:59.041850   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:59.084378   70908 cri.go:89] found id: ""
	I0311 21:36:59.084406   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.084417   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:59.084425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:59.084479   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:59.124105   70908 cri.go:89] found id: ""
	I0311 21:36:59.124151   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.124163   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:59.124173   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:59.124188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:59.202060   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:59.202083   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:59.202098   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:59.284025   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:59.284060   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:59.327926   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:59.327951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:59.382505   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:59.382533   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:01.147260   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.149020   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.450230   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.949834   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.529173   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.532427   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.900084   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:01.914495   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:01.914552   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:01.956887   70908 cri.go:89] found id: ""
	I0311 21:37:01.956912   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.956922   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:01.956929   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:01.956986   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:01.995358   70908 cri.go:89] found id: ""
	I0311 21:37:01.995385   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.995394   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:01.995399   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:01.995448   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:02.033949   70908 cri.go:89] found id: ""
	I0311 21:37:02.033974   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.033984   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:02.033991   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:02.034049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:02.074348   70908 cri.go:89] found id: ""
	I0311 21:37:02.074372   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.074382   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:02.074390   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:02.074449   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:02.112456   70908 cri.go:89] found id: ""
	I0311 21:37:02.112479   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.112486   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:02.112491   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:02.112554   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:02.155102   70908 cri.go:89] found id: ""
	I0311 21:37:02.155130   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.155138   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:02.155149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:02.155205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:02.191359   70908 cri.go:89] found id: ""
	I0311 21:37:02.191386   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.191393   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:02.191399   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:02.191450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:02.236178   70908 cri.go:89] found id: ""
	I0311 21:37:02.236203   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.236211   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:02.236220   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:02.236231   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:02.285794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:02.285818   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:02.342348   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:02.342387   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:02.357230   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:02.357257   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:02.431044   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:02.431064   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:02.431076   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.019473   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:05.035841   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:05.035901   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:05.082013   70908 cri.go:89] found id: ""
	I0311 21:37:05.082034   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.082041   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:05.082046   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:05.082091   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:05.126236   70908 cri.go:89] found id: ""
	I0311 21:37:05.126257   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.126265   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:05.126270   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:05.126311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:05.170573   70908 cri.go:89] found id: ""
	I0311 21:37:05.170601   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.170608   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:05.170614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:05.170658   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:05.213921   70908 cri.go:89] found id: ""
	I0311 21:37:05.213948   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.213958   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:05.213965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:05.214025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:05.261178   70908 cri.go:89] found id: ""
	I0311 21:37:05.261206   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.261213   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:05.261221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:05.261273   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:05.306007   70908 cri.go:89] found id: ""
	I0311 21:37:05.306037   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.306045   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:05.306051   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:05.306106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:05.346653   70908 cri.go:89] found id: ""
	I0311 21:37:05.346679   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.346688   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:05.346694   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:05.346752   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:05.384587   70908 cri.go:89] found id: ""
	I0311 21:37:05.384626   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.384637   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:05.384648   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:05.384664   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:05.440676   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:05.440709   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:05.456989   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:05.457018   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:05.553900   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:05.553932   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:05.553947   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.633270   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:05.633300   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:05.647077   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.146975   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:06.449502   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.450008   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:06.028642   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.529826   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.181935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:08.198179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:08.198251   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:08.236484   70908 cri.go:89] found id: ""
	I0311 21:37:08.236506   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.236516   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:08.236524   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:08.236578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:08.277701   70908 cri.go:89] found id: ""
	I0311 21:37:08.277731   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.277739   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:08.277745   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:08.277804   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:08.319559   70908 cri.go:89] found id: ""
	I0311 21:37:08.319585   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.319596   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:08.319604   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:08.319666   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:08.359752   70908 cri.go:89] found id: ""
	I0311 21:37:08.359777   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.359785   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:08.359791   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:08.359849   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:08.397432   70908 cri.go:89] found id: ""
	I0311 21:37:08.397453   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.397460   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:08.397465   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:08.397511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:08.438708   70908 cri.go:89] found id: ""
	I0311 21:37:08.438732   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.438742   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:08.438749   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:08.438807   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:08.479511   70908 cri.go:89] found id: ""
	I0311 21:37:08.479533   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.479560   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:08.479566   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:08.479620   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:08.521634   70908 cri.go:89] found id: ""
	I0311 21:37:08.521659   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.521670   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:08.521680   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:08.521693   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:08.577033   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:08.577065   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:08.592006   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:08.592030   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:08.680862   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:08.680903   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:08.680919   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:08.764991   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:08.765037   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:10.147819   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:12.648352   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:10.949371   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:12.949571   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:11.028245   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:13.028689   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:15.034232   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:11.313168   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:11.326808   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:11.326876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:11.364223   70908 cri.go:89] found id: ""
	I0311 21:37:11.364246   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.364254   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:11.364259   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:11.364311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:11.401361   70908 cri.go:89] found id: ""
	I0311 21:37:11.401391   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.401402   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:11.401409   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:11.401459   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:11.441927   70908 cri.go:89] found id: ""
	I0311 21:37:11.441950   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.441957   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:11.441962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:11.442015   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:11.480804   70908 cri.go:89] found id: ""
	I0311 21:37:11.480836   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.480847   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:11.480855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:11.480913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:11.520135   70908 cri.go:89] found id: ""
	I0311 21:37:11.520166   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.520177   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:11.520193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:11.520255   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:11.559214   70908 cri.go:89] found id: ""
	I0311 21:37:11.559244   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.559255   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:11.559263   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:11.559322   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:11.597346   70908 cri.go:89] found id: ""
	I0311 21:37:11.597374   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.597383   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:11.597391   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:11.597452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:11.646095   70908 cri.go:89] found id: ""
	I0311 21:37:11.646118   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.646127   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:11.646137   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:11.646167   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:11.691813   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:11.691844   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:11.745270   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:11.745303   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:11.761107   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:11.761131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:11.841033   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:11.841059   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:11.841074   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:14.431709   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:14.447064   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:14.447131   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:14.493094   70908 cri.go:89] found id: ""
	I0311 21:37:14.493132   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.493140   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:14.493146   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:14.493195   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:14.537391   70908 cri.go:89] found id: ""
	I0311 21:37:14.537415   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.537423   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:14.537428   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:14.537487   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:14.576284   70908 cri.go:89] found id: ""
	I0311 21:37:14.576306   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.576313   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:14.576319   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:14.576375   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:14.627057   70908 cri.go:89] found id: ""
	I0311 21:37:14.627086   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.627097   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:14.627105   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:14.627163   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:14.669204   70908 cri.go:89] found id: ""
	I0311 21:37:14.669226   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.669233   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:14.669238   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:14.669293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:14.708787   70908 cri.go:89] found id: ""
	I0311 21:37:14.708812   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.708820   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:14.708826   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:14.708892   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:14.749795   70908 cri.go:89] found id: ""
	I0311 21:37:14.749819   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.749828   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:14.749835   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:14.749893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:14.794871   70908 cri.go:89] found id: ""
	I0311 21:37:14.794900   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.794911   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:14.794922   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:14.794936   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:14.850022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:14.850050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:14.866589   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:14.866618   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:14.968887   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:14.968906   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:14.968921   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:15.047376   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:15.047404   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:14.648528   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:16.649275   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:18.649842   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:14.951387   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.451239   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.529411   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:20.030012   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.599834   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:17.613610   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:17.613665   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:17.655340   70908 cri.go:89] found id: ""
	I0311 21:37:17.655361   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.655369   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:17.655374   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:17.655416   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:17.695071   70908 cri.go:89] found id: ""
	I0311 21:37:17.695103   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.695114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:17.695121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:17.695178   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:17.731914   70908 cri.go:89] found id: ""
	I0311 21:37:17.731938   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.731946   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:17.731952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:17.732012   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:17.768198   70908 cri.go:89] found id: ""
	I0311 21:37:17.768224   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.768236   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:17.768242   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:17.768301   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:17.802881   70908 cri.go:89] found id: ""
	I0311 21:37:17.802909   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.802920   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:17.802928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:17.802983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:17.841660   70908 cri.go:89] found id: ""
	I0311 21:37:17.841684   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.841692   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:17.841698   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:17.841749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:17.880154   70908 cri.go:89] found id: ""
	I0311 21:37:17.880183   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.880196   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:17.880205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:17.880260   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:17.919797   70908 cri.go:89] found id: ""
	I0311 21:37:17.919822   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.919829   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:17.919837   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:17.919847   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:17.976607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:17.976636   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:17.993313   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:17.993339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:18.069928   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:18.069956   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:18.069973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:18.152257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:18.152285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:20.706553   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:20.721148   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:20.721214   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:20.762913   70908 cri.go:89] found id: ""
	I0311 21:37:20.762935   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.762943   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:20.762952   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:20.762997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:20.811120   70908 cri.go:89] found id: ""
	I0311 21:37:20.811147   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.811158   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:20.811165   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:20.811225   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:20.848987   70908 cri.go:89] found id: ""
	I0311 21:37:20.849015   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.849026   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:20.849033   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:20.849098   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:20.896201   70908 cri.go:89] found id: ""
	I0311 21:37:20.896226   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.896233   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:20.896240   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:20.896299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:20.936570   70908 cri.go:89] found id: ""
	I0311 21:37:20.936595   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.936603   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:20.936608   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:20.936657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:20.977535   70908 cri.go:89] found id: ""
	I0311 21:37:20.977565   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.977576   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:20.977584   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:20.977647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:21.015370   70908 cri.go:89] found id: ""
	I0311 21:37:21.015395   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.015405   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:21.015413   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:21.015472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:21.146868   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:23.147272   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:19.950972   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:22.450298   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:22.528109   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:24.530216   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:21.056190   70908 cri.go:89] found id: ""
	I0311 21:37:21.056214   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.056225   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:21.056235   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:21.056255   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:21.112022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:21.112051   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:21.128841   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:21.128872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:21.209690   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:21.209716   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:21.209732   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:21.291064   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:21.291099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:23.844334   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:23.860000   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:23.860061   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:23.899777   70908 cri.go:89] found id: ""
	I0311 21:37:23.899805   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.899814   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:23.899820   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:23.899879   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:23.941510   70908 cri.go:89] found id: ""
	I0311 21:37:23.941537   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.941547   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:23.941555   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:23.941627   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:23.980564   70908 cri.go:89] found id: ""
	I0311 21:37:23.980592   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.980602   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:23.980614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:23.980676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:24.020310   70908 cri.go:89] found id: ""
	I0311 21:37:24.020337   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.020348   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:24.020354   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:24.020410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:24.059320   70908 cri.go:89] found id: ""
	I0311 21:37:24.059349   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.059359   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:24.059367   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:24.059424   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:24.096625   70908 cri.go:89] found id: ""
	I0311 21:37:24.096652   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.096660   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:24.096666   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:24.096723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:24.137068   70908 cri.go:89] found id: ""
	I0311 21:37:24.137100   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.137112   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:24.137121   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:24.137182   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:24.181298   70908 cri.go:89] found id: ""
	I0311 21:37:24.181325   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.181336   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:24.181348   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:24.181364   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:24.265423   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:24.265454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:24.318088   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:24.318113   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:24.374402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:24.374430   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:24.388934   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:24.388962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:24.475842   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:25.647164   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:27.650157   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:24.948984   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:26.949444   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:28.950697   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:27.030240   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:29.030848   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
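	The interleaved pod_ready lines (pids 70604, 70417 and 70458) appear to come from parallel runs that keep polling their metrics-server pods, none of which ever reports Ready. To inspect one of those pods directly, a sketch with the pod name taken from the log and the kube context left as a placeholder:

	kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-7qw98 -o wide
	kubectl --context <profile> -n kube-system describe pod metrics-server-57f55c9bc5-7qw98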
	I0311 21:37:26.976017   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:26.991533   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:26.991602   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:27.034750   70908 cri.go:89] found id: ""
	I0311 21:37:27.034769   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.034776   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:27.034781   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:27.034837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:27.073275   70908 cri.go:89] found id: ""
	I0311 21:37:27.073301   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.073309   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:27.073317   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:27.073363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:27.113396   70908 cri.go:89] found id: ""
	I0311 21:37:27.113418   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.113425   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:27.113431   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:27.113482   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:27.157442   70908 cri.go:89] found id: ""
	I0311 21:37:27.157465   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.157475   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:27.157482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:27.157534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:27.197277   70908 cri.go:89] found id: ""
	I0311 21:37:27.197302   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.197309   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:27.197315   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:27.197363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:27.237967   70908 cri.go:89] found id: ""
	I0311 21:37:27.237991   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.237999   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:27.238005   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:27.238077   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:27.280434   70908 cri.go:89] found id: ""
	I0311 21:37:27.280459   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.280467   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:27.280472   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:27.280535   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:27.334940   70908 cri.go:89] found id: ""
	I0311 21:37:27.334970   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.334982   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:27.334992   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:27.335010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:27.402535   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:27.402570   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:27.416758   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:27.416787   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:27.492762   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:27.492786   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:27.492803   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:27.576989   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:27.577032   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:30.124039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:30.138419   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:30.138483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:30.180900   70908 cri.go:89] found id: ""
	I0311 21:37:30.180926   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.180936   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:30.180944   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:30.180998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:30.222886   70908 cri.go:89] found id: ""
	I0311 21:37:30.222913   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.222921   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:30.222926   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:30.222976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:30.264332   70908 cri.go:89] found id: ""
	I0311 21:37:30.264357   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.264367   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:30.264376   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:30.264436   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:30.307084   70908 cri.go:89] found id: ""
	I0311 21:37:30.307112   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.307123   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:30.307130   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:30.307188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:30.345954   70908 cri.go:89] found id: ""
	I0311 21:37:30.345979   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.345990   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:30.345997   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:30.346057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:30.389408   70908 cri.go:89] found id: ""
	I0311 21:37:30.389439   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.389450   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:30.389457   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:30.389517   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:30.438380   70908 cri.go:89] found id: ""
	I0311 21:37:30.438410   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.438420   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:30.438427   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:30.438489   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:30.479860   70908 cri.go:89] found id: ""
	I0311 21:37:30.479884   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.479895   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:30.479906   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:30.479920   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:30.535831   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:30.535857   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:30.552702   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:30.552725   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:30.633417   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:30.633439   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:30.633454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:30.723106   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:30.723143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:30.147993   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:32.152839   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:31.450942   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.949947   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:31.528469   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.529721   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.270654   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:33.296640   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:33.296710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:33.366053   70908 cri.go:89] found id: ""
	I0311 21:37:33.366082   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.366093   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:33.366101   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:33.366161   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:33.421455   70908 cri.go:89] found id: ""
	I0311 21:37:33.421488   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.421501   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:33.421509   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:33.421583   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:33.464555   70908 cri.go:89] found id: ""
	I0311 21:37:33.464579   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.464586   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:33.464592   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:33.464647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:33.507044   70908 cri.go:89] found id: ""
	I0311 21:37:33.507086   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.507100   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:33.507110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:33.507175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:33.561446   70908 cri.go:89] found id: ""
	I0311 21:37:33.561518   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.561532   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:33.561540   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:33.561601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:33.604496   70908 cri.go:89] found id: ""
	I0311 21:37:33.604519   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.604528   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:33.604534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:33.604591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:33.645754   70908 cri.go:89] found id: ""
	I0311 21:37:33.645781   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.645791   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:33.645797   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:33.645869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:33.690041   70908 cri.go:89] found id: ""
	I0311 21:37:33.690071   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.690082   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:33.690092   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:33.690108   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:33.765708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:33.765737   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:33.765752   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:33.848869   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:33.848906   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:33.900191   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:33.900223   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:33.957101   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:33.957138   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:34.646831   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.647640   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.449429   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:38.948831   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.028141   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:38.028588   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:40.028676   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.474442   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:36.490159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:36.490231   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:36.537784   70908 cri.go:89] found id: ""
	I0311 21:37:36.537812   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.537822   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:36.537829   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:36.537885   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:36.581192   70908 cri.go:89] found id: ""
	I0311 21:37:36.581219   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.581230   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:36.581237   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:36.581297   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:36.620448   70908 cri.go:89] found id: ""
	I0311 21:37:36.620480   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.620492   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:36.620501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:36.620566   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:36.662135   70908 cri.go:89] found id: ""
	I0311 21:37:36.662182   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.662193   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:36.662203   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:36.662268   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:36.708138   70908 cri.go:89] found id: ""
	I0311 21:37:36.708178   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.708188   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:36.708198   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:36.708267   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:36.749668   70908 cri.go:89] found id: ""
	I0311 21:37:36.749697   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.749708   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:36.749717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:36.749783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:36.788455   70908 cri.go:89] found id: ""
	I0311 21:37:36.788476   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.788483   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:36.788488   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:36.788534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:36.830216   70908 cri.go:89] found id: ""
	I0311 21:37:36.830244   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.830257   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:36.830267   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:36.830285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:36.915306   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:36.915336   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:36.958861   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:36.958892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:37.014463   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:37.014489   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:37.029979   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:37.030010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:37.106840   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
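The cycle above repeats for the rest of this failure: minikube probes each control-plane component with crictl, finds no containers, falls back to journalctl for CRI-O and the kubelet, and then tries kubectl describe nodes, which is refused because nothing is serving on localhost:8443. Below is a minimal local sketch of the same probes, using only the commands quoted in the log; the probe helper and the idea of running them directly (rather than over the test's SSH runner) are illustrative, not minikube's implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // probe runs one diagnostic command and prints whatever it returns.
    func probe(args ...string) {
    	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
    	fmt.Printf("== %s (err=%v)\n%s\n", strings.Join(args, " "), err, out)
    }

    func main() {
    	// Same component list the log walks through with crictl.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, c := range components {
    		probe("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c)
    	}
    	// Fallback log sources used when no containers are found.
    	probe("sudo", "journalctl", "-u", "crio", "-n", "400")
    	probe("sudo", "journalctl", "-u", "kubelet", "-n", "400")
    	probe("sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl", "describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig")
    }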
	I0311 21:37:39.607929   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:39.626247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:39.626307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:39.667409   70908 cri.go:89] found id: ""
	I0311 21:37:39.667436   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.667446   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:39.667454   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:39.667509   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:39.714167   70908 cri.go:89] found id: ""
	I0311 21:37:39.714198   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.714210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:39.714217   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:39.714275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:39.754759   70908 cri.go:89] found id: ""
	I0311 21:37:39.754787   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.754798   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:39.754805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:39.754865   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:39.794999   70908 cri.go:89] found id: ""
	I0311 21:37:39.795028   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.795038   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:39.795045   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:39.795108   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:39.836284   70908 cri.go:89] found id: ""
	I0311 21:37:39.836310   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.836321   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:39.836328   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:39.836386   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:39.876487   70908 cri.go:89] found id: ""
	I0311 21:37:39.876518   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.876530   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:39.876539   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:39.876601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:39.918750   70908 cri.go:89] found id: ""
	I0311 21:37:39.918785   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.918796   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:39.918813   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:39.918871   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:39.958486   70908 cri.go:89] found id: ""
	I0311 21:37:39.958517   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.958529   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:39.958537   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:39.958550   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:39.973899   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:39.973925   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:40.055954   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:40.055980   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:40.055995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:40.144801   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:40.144826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:40.189692   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:40.189722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:39.148581   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:41.647869   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:43.648550   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:40.949502   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.951277   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.528844   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:44.529317   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
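Interleaved with the 70908 output, three other StartStop clusters (PIDs 70604, 70417, 70458) are polling pod_ready.go against their metrics-server pods, which stay "Ready":"False" for the whole window. A sketch of the same readiness check expressed outside the test harness follows; it assumes kubectl is pointed at the affected profile, the pod name is copied from the log, and the 2-second cadence is illustrative rather than the harness's actual interval.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // ready reports whether the pod's Ready condition is "True", the condition
    // pod_ready.go keeps seeing as "False" in the log above.
    func ready(namespace, pod string) bool {
    	out, err := exec.Command("kubectl", "get", "pod", "-n", namespace, pod,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
    	// Pod name taken from the log; the polling interval is illustrative.
    	for !ready("kube-system", "metrics-server-57f55c9bc5-7qw98") {
    		fmt.Println(`still "Ready":"False", retrying`)
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("metrics-server is Ready")
    }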
	I0311 21:37:42.748909   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:42.763794   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:42.763877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:42.801470   70908 cri.go:89] found id: ""
	I0311 21:37:42.801493   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.801500   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:42.801506   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:42.801561   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:42.846267   70908 cri.go:89] found id: ""
	I0311 21:37:42.846294   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.846301   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:42.846307   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:42.846357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:42.890257   70908 cri.go:89] found id: ""
	I0311 21:37:42.890283   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.890294   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:42.890301   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:42.890357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:42.933605   70908 cri.go:89] found id: ""
	I0311 21:37:42.933628   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.933636   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:42.933643   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:42.933699   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:42.979020   70908 cri.go:89] found id: ""
	I0311 21:37:42.979043   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.979052   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:42.979059   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:42.979122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:43.021695   70908 cri.go:89] found id: ""
	I0311 21:37:43.021724   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.021734   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:43.021741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:43.021801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:43.064356   70908 cri.go:89] found id: ""
	I0311 21:37:43.064398   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.064406   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:43.064412   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:43.064457   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:43.101878   70908 cri.go:89] found id: ""
	I0311 21:37:43.101901   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.101909   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:43.101917   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:43.101930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:43.185836   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:43.185861   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:43.185874   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:43.268879   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:43.268912   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:43.319582   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:43.319614   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:43.374996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:43.375022   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:45.890408   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:45.905973   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:45.906041   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:45.951994   70908 cri.go:89] found id: ""
	I0311 21:37:45.952025   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.952040   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:45.952049   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:45.952112   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:45.992913   70908 cri.go:89] found id: ""
	I0311 21:37:45.992953   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.992964   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:45.992971   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:45.993034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:46.036306   70908 cri.go:89] found id: ""
	I0311 21:37:46.036334   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.036345   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:46.036353   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:46.036410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:46.147754   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:48.647534   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:45.450180   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:47.949568   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:46.532244   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:49.028905   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:46.077532   70908 cri.go:89] found id: ""
	I0311 21:37:46.077564   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.077576   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:46.077583   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:46.077633   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:46.115953   70908 cri.go:89] found id: ""
	I0311 21:37:46.115976   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.115983   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:46.115990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:46.116072   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:46.155665   70908 cri.go:89] found id: ""
	I0311 21:37:46.155699   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.155709   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:46.155717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:46.155775   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:46.197650   70908 cri.go:89] found id: ""
	I0311 21:37:46.197677   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.197696   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:46.197705   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:46.197766   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:46.243006   70908 cri.go:89] found id: ""
	I0311 21:37:46.243030   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.243037   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:46.243045   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:46.243058   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:46.294668   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:46.294696   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:46.308700   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:46.308721   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:46.387188   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:46.387207   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:46.387219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:46.480390   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:46.480423   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.027202   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:49.042292   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:49.042361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:49.081547   70908 cri.go:89] found id: ""
	I0311 21:37:49.081568   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.081579   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:49.081585   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:49.081632   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:49.127438   70908 cri.go:89] found id: ""
	I0311 21:37:49.127467   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.127477   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:49.127485   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:49.127545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:49.173992   70908 cri.go:89] found id: ""
	I0311 21:37:49.174024   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.174033   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:49.174042   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:49.174114   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:49.217087   70908 cri.go:89] found id: ""
	I0311 21:37:49.217120   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.217130   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:49.217138   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:49.217198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:49.255929   70908 cri.go:89] found id: ""
	I0311 21:37:49.255955   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.255970   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:49.255978   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:49.256037   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:49.296373   70908 cri.go:89] found id: ""
	I0311 21:37:49.296399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.296409   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:49.296417   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:49.296474   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:49.335063   70908 cri.go:89] found id: ""
	I0311 21:37:49.335092   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.335103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:49.335110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:49.335176   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:49.378374   70908 cri.go:89] found id: ""
	I0311 21:37:49.378399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.378406   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:49.378414   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:49.378427   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.422193   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:49.422220   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:49.474861   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:49.474893   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:49.490193   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:49.490219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:49.571857   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:49.571880   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:49.571895   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:51.149814   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:53.648033   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:49.949603   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:51.949943   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:53.951963   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:51.531753   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:54.028723   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:52.168934   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:52.183086   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:52.183154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:52.221632   70908 cri.go:89] found id: ""
	I0311 21:37:52.221664   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.221675   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:52.221682   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:52.221743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:52.261550   70908 cri.go:89] found id: ""
	I0311 21:37:52.261575   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.261582   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:52.261588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:52.261638   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:52.302879   70908 cri.go:89] found id: ""
	I0311 21:37:52.302910   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.302920   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:52.302927   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:52.302987   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:52.346462   70908 cri.go:89] found id: ""
	I0311 21:37:52.346485   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.346494   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:52.346499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:52.346551   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:52.387949   70908 cri.go:89] found id: ""
	I0311 21:37:52.387977   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.387988   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:52.387995   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:52.388052   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:52.428527   70908 cri.go:89] found id: ""
	I0311 21:37:52.428564   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.428574   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:52.428582   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:52.428649   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:52.469516   70908 cri.go:89] found id: ""
	I0311 21:37:52.469548   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.469558   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:52.469565   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:52.469616   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:52.508371   70908 cri.go:89] found id: ""
	I0311 21:37:52.508407   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.508417   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:52.508429   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:52.508444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:52.587309   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:52.587346   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:52.587361   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:52.666419   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:52.666449   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:52.713150   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:52.713184   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:52.768011   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:52.768041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.284835   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:55.298742   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:55.298799   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:55.340215   70908 cri.go:89] found id: ""
	I0311 21:37:55.340240   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.340251   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:55.340257   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:55.340321   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:55.377930   70908 cri.go:89] found id: ""
	I0311 21:37:55.377956   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.377967   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:55.377974   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:55.378039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:55.418786   70908 cri.go:89] found id: ""
	I0311 21:37:55.418814   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.418822   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:55.418827   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:55.418883   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:55.461566   70908 cri.go:89] found id: ""
	I0311 21:37:55.461586   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.461593   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:55.461601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:55.461655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:55.502917   70908 cri.go:89] found id: ""
	I0311 21:37:55.502945   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.502955   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:55.502962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:55.503022   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:55.551417   70908 cri.go:89] found id: ""
	I0311 21:37:55.551441   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.551454   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:55.551462   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:55.551514   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:55.596060   70908 cri.go:89] found id: ""
	I0311 21:37:55.596092   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.596103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:55.596111   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:55.596172   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:55.635495   70908 cri.go:89] found id: ""
	I0311 21:37:55.635523   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.635535   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:55.635547   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:55.635564   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:55.691705   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:55.691735   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.707696   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:55.707718   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:55.780432   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:55.780452   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:55.780465   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:55.866033   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:55.866067   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:55.648873   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.147404   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:56.452135   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.951150   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:56.528533   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.529769   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.437299   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:58.453058   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:58.453125   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:58.493317   70908 cri.go:89] found id: ""
	I0311 21:37:58.493339   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.493347   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:58.493353   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:58.493408   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:58.543533   70908 cri.go:89] found id: ""
	I0311 21:37:58.543556   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.543567   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:58.543578   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:58.543634   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:58.585255   70908 cri.go:89] found id: ""
	I0311 21:37:58.585282   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.585292   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:58.585300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:58.585359   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:58.622393   70908 cri.go:89] found id: ""
	I0311 21:37:58.622421   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.622428   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:58.622434   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:58.622501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:58.661939   70908 cri.go:89] found id: ""
	I0311 21:37:58.661963   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.661971   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:58.661977   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:58.662034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:58.703628   70908 cri.go:89] found id: ""
	I0311 21:37:58.703663   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.703674   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:58.703682   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:58.703743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:58.742553   70908 cri.go:89] found id: ""
	I0311 21:37:58.742583   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.742594   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:58.742601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:58.742662   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:58.785016   70908 cri.go:89] found id: ""
	I0311 21:37:58.785040   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.785047   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:58.785055   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:58.785071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:58.857757   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:58.857773   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:58.857786   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:58.946120   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:58.946148   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:58.996288   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:58.996328   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:59.055371   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:59.055407   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:00.651621   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.149663   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:00.951776   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.451012   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:01.028303   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.028600   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:05.032276   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:01.571092   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:01.591149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:01.591238   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:01.629156   70908 cri.go:89] found id: ""
	I0311 21:38:01.629184   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.629196   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:01.629203   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:01.629261   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:01.673656   70908 cri.go:89] found id: ""
	I0311 21:38:01.673680   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.673687   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:01.673692   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:01.673739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:01.713361   70908 cri.go:89] found id: ""
	I0311 21:38:01.713389   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.713397   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:01.713403   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:01.713450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:01.757256   70908 cri.go:89] found id: ""
	I0311 21:38:01.757286   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.757298   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:01.757305   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:01.757362   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:01.797538   70908 cri.go:89] found id: ""
	I0311 21:38:01.797565   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.797573   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:01.797580   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:01.797635   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:01.838664   70908 cri.go:89] found id: ""
	I0311 21:38:01.838692   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.838701   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:01.838707   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:01.838754   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:01.893638   70908 cri.go:89] found id: ""
	I0311 21:38:01.893668   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.893679   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:01.893686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:01.893747   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:01.935547   70908 cri.go:89] found id: ""
	I0311 21:38:01.935569   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.935577   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:01.935585   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:01.935596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:01.989964   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:01.989988   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:02.004949   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:02.004973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:02.082006   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:02.082024   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:02.082041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:02.171040   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:02.171072   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:04.724699   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:04.741445   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:04.741512   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:04.783924   70908 cri.go:89] found id: ""
	I0311 21:38:04.783951   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.783962   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:04.783969   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:04.784028   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:04.825806   70908 cri.go:89] found id: ""
	I0311 21:38:04.825835   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.825845   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:04.825852   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:04.825913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:04.864070   70908 cri.go:89] found id: ""
	I0311 21:38:04.864106   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.864118   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:04.864126   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:04.864181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:04.901735   70908 cri.go:89] found id: ""
	I0311 21:38:04.901759   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.901769   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:04.901777   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:04.901832   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:04.941473   70908 cri.go:89] found id: ""
	I0311 21:38:04.941496   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.941505   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:04.941513   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:04.941569   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:04.993132   70908 cri.go:89] found id: ""
	I0311 21:38:04.993162   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.993170   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:04.993178   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:04.993237   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:05.037925   70908 cri.go:89] found id: ""
	I0311 21:38:05.037950   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.037960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:05.037967   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:05.038026   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:05.080726   70908 cri.go:89] found id: ""
	I0311 21:38:05.080773   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.080784   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:05.080794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:05.080806   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:05.138205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:05.138233   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:05.155048   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:05.155071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:05.233067   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:05.233086   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:05.233099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:05.317897   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:05.317928   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:05.646661   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.647686   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:05.949900   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.950261   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.528049   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:09.530724   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.863484   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:07.877342   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:07.877411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:07.916352   70908 cri.go:89] found id: ""
	I0311 21:38:07.916374   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.916383   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:07.916391   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:07.916454   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:07.954833   70908 cri.go:89] found id: ""
	I0311 21:38:07.954854   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.954863   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:07.954870   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:07.954926   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:07.993124   70908 cri.go:89] found id: ""
	I0311 21:38:07.993152   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.993161   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:07.993168   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:07.993232   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:08.039081   70908 cri.go:89] found id: ""
	I0311 21:38:08.039108   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.039118   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:08.039125   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:08.039191   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:08.084627   70908 cri.go:89] found id: ""
	I0311 21:38:08.084650   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.084658   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:08.084665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:08.084712   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:08.125986   70908 cri.go:89] found id: ""
	I0311 21:38:08.126015   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.126026   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:08.126034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:08.126080   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:08.167149   70908 cri.go:89] found id: ""
	I0311 21:38:08.167176   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.167188   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:08.167193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:08.167252   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:08.204988   70908 cri.go:89] found id: ""
	I0311 21:38:08.205012   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.205020   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:08.205028   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:08.205043   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:08.295226   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:08.295268   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:08.357789   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:08.357820   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:08.434091   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:08.434132   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:08.455208   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:08.455240   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:08.529620   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:11.030060   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:09.648047   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.649628   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:13.652370   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:10.450139   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:12.949551   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.531354   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:14.029703   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.044303   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:11.046353   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:11.088067   70908 cri.go:89] found id: ""
	I0311 21:38:11.088099   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.088110   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:11.088117   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:11.088177   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:11.131077   70908 cri.go:89] found id: ""
	I0311 21:38:11.131104   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.131114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:11.131121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:11.131181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:11.172409   70908 cri.go:89] found id: ""
	I0311 21:38:11.172431   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.172439   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:11.172444   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:11.172496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:11.216775   70908 cri.go:89] found id: ""
	I0311 21:38:11.216817   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.216825   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:11.216830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:11.216886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:11.255105   70908 cri.go:89] found id: ""
	I0311 21:38:11.255129   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.255137   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:11.255142   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:11.255205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:11.292397   70908 cri.go:89] found id: ""
	I0311 21:38:11.292429   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.292440   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:11.292448   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:11.292518   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:11.330376   70908 cri.go:89] found id: ""
	I0311 21:38:11.330397   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.330408   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:11.330415   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:11.330476   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:11.367699   70908 cri.go:89] found id: ""
	I0311 21:38:11.367727   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.367737   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:11.367748   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:11.367763   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:11.421847   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:11.421876   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:11.437570   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:11.437593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:11.522084   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:11.522108   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:11.522123   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:11.606181   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:11.606228   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.153952   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:14.175726   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:14.175798   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:14.221752   70908 cri.go:89] found id: ""
	I0311 21:38:14.221784   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.221798   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:14.221807   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:14.221895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:14.286690   70908 cri.go:89] found id: ""
	I0311 21:38:14.286720   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.286740   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:14.286757   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:14.286824   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:14.343764   70908 cri.go:89] found id: ""
	I0311 21:38:14.343790   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.343799   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:14.343806   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:14.343876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:14.381198   70908 cri.go:89] found id: ""
	I0311 21:38:14.381220   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.381230   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:14.381237   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:14.381307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:14.421578   70908 cri.go:89] found id: ""
	I0311 21:38:14.421603   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.421613   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:14.421620   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:14.421678   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:14.462945   70908 cri.go:89] found id: ""
	I0311 21:38:14.462972   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.462982   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:14.462990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:14.463049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:14.503503   70908 cri.go:89] found id: ""
	I0311 21:38:14.503532   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.503543   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:14.503550   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:14.503610   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:14.543987   70908 cri.go:89] found id: ""
	I0311 21:38:14.544021   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.544034   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:14.544045   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:14.544062   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:14.624781   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:14.624804   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:14.624821   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:14.707130   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:14.707161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.750815   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:14.750848   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:14.806855   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:14.806882   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:16.149516   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:18.646716   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:14.949827   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:16.953660   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:16.031935   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:18.529085   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:17.325267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:17.340421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:17.340483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:17.382808   70908 cri.go:89] found id: ""
	I0311 21:38:17.382831   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.382841   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:17.382849   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:17.382906   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:17.424838   70908 cri.go:89] found id: ""
	I0311 21:38:17.424865   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.424875   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:17.424883   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:17.424940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:17.466298   70908 cri.go:89] found id: ""
	I0311 21:38:17.466320   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.466327   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:17.466333   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:17.466397   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:17.506648   70908 cri.go:89] found id: ""
	I0311 21:38:17.506678   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.506685   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:17.506691   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:17.506739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:17.544019   70908 cri.go:89] found id: ""
	I0311 21:38:17.544048   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.544057   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:17.544067   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:17.544154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:17.583691   70908 cri.go:89] found id: ""
	I0311 21:38:17.583710   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.583717   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:17.583723   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:17.583768   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:17.624432   70908 cri.go:89] found id: ""
	I0311 21:38:17.624453   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.624460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:17.624466   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:17.624516   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:17.663253   70908 cri.go:89] found id: ""
	I0311 21:38:17.663294   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.663312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:17.663322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:17.663339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:17.749928   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:17.749962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:17.792817   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:17.792853   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:17.847391   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:17.847419   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:17.862813   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:17.862835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:17.935307   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:20.435995   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:20.452441   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:20.452510   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:20.491960   70908 cri.go:89] found id: ""
	I0311 21:38:20.491985   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.491992   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:20.491998   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:20.492045   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:20.531679   70908 cri.go:89] found id: ""
	I0311 21:38:20.531700   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.531707   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:20.531712   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:20.531764   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:20.571666   70908 cri.go:89] found id: ""
	I0311 21:38:20.571687   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.571694   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:20.571699   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:20.571762   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:20.611165   70908 cri.go:89] found id: ""
	I0311 21:38:20.611187   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.611194   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:20.611199   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:20.611248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:20.648680   70908 cri.go:89] found id: ""
	I0311 21:38:20.648709   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.648720   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:20.648728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:20.648801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:20.690177   70908 cri.go:89] found id: ""
	I0311 21:38:20.690204   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.690215   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:20.690222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:20.690298   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:20.728918   70908 cri.go:89] found id: ""
	I0311 21:38:20.728949   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.728960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:20.728968   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:20.729039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:20.773559   70908 cri.go:89] found id: ""
	I0311 21:38:20.773586   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.773596   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:20.773607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:20.773623   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:20.788709   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:20.788750   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:20.869832   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:20.869856   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:20.869868   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:20.963515   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:20.963544   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:21.007029   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:21.007055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:21.147703   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.660410   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:19.449416   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:21.451194   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.950401   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:20.529497   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:22.529947   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:25.030431   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.566134   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:23.583855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:23.583911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:23.623605   70908 cri.go:89] found id: ""
	I0311 21:38:23.623633   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.623656   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:23.623664   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:23.623719   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:23.663058   70908 cri.go:89] found id: ""
	I0311 21:38:23.663081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.663091   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:23.663098   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:23.663157   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:23.701930   70908 cri.go:89] found id: ""
	I0311 21:38:23.701963   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.701975   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:23.701985   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:23.702049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:23.743925   70908 cri.go:89] found id: ""
	I0311 21:38:23.743955   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.743964   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:23.743970   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:23.744046   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:23.784030   70908 cri.go:89] found id: ""
	I0311 21:38:23.784055   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.784066   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:23.784073   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:23.784132   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:23.823054   70908 cri.go:89] found id: ""
	I0311 21:38:23.823081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.823089   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:23.823097   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:23.823156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:23.863629   70908 cri.go:89] found id: ""
	I0311 21:38:23.863654   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.863662   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:23.863668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:23.863724   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:23.904429   70908 cri.go:89] found id: ""
	I0311 21:38:23.904454   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.904462   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:23.904470   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:23.904481   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:23.962356   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:23.962393   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:23.977667   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:23.977689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:24.068791   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:24.068820   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:24.068835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:24.157857   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:24.157892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:26.147447   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:28.148069   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:26.450243   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:28.950495   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:27.530194   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:30.029286   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:26.705872   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:26.720840   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:26.720936   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:26.766449   70908 cri.go:89] found id: ""
	I0311 21:38:26.766480   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.766490   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:26.766496   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:26.766557   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:26.806179   70908 cri.go:89] found id: ""
	I0311 21:38:26.806203   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.806210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:26.806216   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:26.806275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:26.850737   70908 cri.go:89] found id: ""
	I0311 21:38:26.850765   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.850775   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:26.850785   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:26.850845   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:26.897694   70908 cri.go:89] found id: ""
	I0311 21:38:26.897722   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.897733   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:26.897744   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:26.897802   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:26.940940   70908 cri.go:89] found id: ""
	I0311 21:38:26.940962   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.940969   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:26.940975   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:26.941021   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:26.978576   70908 cri.go:89] found id: ""
	I0311 21:38:26.978604   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.978614   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:26.978625   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:26.978682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:27.016331   70908 cri.go:89] found id: ""
	I0311 21:38:27.016363   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.016374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:27.016381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:27.016439   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:27.061541   70908 cri.go:89] found id: ""
	I0311 21:38:27.061569   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.061580   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:27.061590   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:27.061609   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:27.154977   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:27.155017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:27.204458   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:27.204488   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:27.259960   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:27.259997   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:27.277806   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:27.277832   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:27.356111   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:29.856828   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:29.871331   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:29.871413   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:29.912867   70908 cri.go:89] found id: ""
	I0311 21:38:29.912895   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.912904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:29.912910   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:29.912973   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:29.953458   70908 cri.go:89] found id: ""
	I0311 21:38:29.953483   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.953491   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:29.953497   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:29.953553   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:29.997873   70908 cri.go:89] found id: ""
	I0311 21:38:29.997904   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.997912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:29.997921   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:29.997983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:30.038831   70908 cri.go:89] found id: ""
	I0311 21:38:30.038861   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.038872   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:30.038880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:30.038940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:30.082089   70908 cri.go:89] found id: ""
	I0311 21:38:30.082117   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.082127   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:30.082135   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:30.082213   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:30.121167   70908 cri.go:89] found id: ""
	I0311 21:38:30.121198   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.121209   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:30.121216   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:30.121274   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:30.162342   70908 cri.go:89] found id: ""
	I0311 21:38:30.162371   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.162380   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:30.162393   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:30.162452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:30.201727   70908 cri.go:89] found id: ""
	I0311 21:38:30.201753   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.201761   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:30.201769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:30.201780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:30.283314   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:30.283346   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:30.333900   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:30.333930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:30.391761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:30.391798   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:30.407907   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:30.407930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:30.489560   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:30.646773   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.649048   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:31.456251   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:33.951315   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.529160   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:34.530183   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.989976   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:33.004724   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:33.004814   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:33.049701   70908 cri.go:89] found id: ""
	I0311 21:38:33.049733   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.049743   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:33.049753   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:33.049823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:33.097759   70908 cri.go:89] found id: ""
	I0311 21:38:33.097792   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.097804   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:33.097811   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:33.097875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:33.143257   70908 cri.go:89] found id: ""
	I0311 21:38:33.143291   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.143300   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:33.143308   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:33.143376   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:33.187434   70908 cri.go:89] found id: ""
	I0311 21:38:33.187464   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.187477   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:33.187483   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:33.187558   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:33.236201   70908 cri.go:89] found id: ""
	I0311 21:38:33.236230   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.236239   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:33.236245   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:33.236312   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:33.279710   70908 cri.go:89] found id: ""
	I0311 21:38:33.279783   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.279816   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:33.279830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:33.279898   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:33.325022   70908 cri.go:89] found id: ""
	I0311 21:38:33.325053   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.325064   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:33.325072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:33.325138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:33.368588   70908 cri.go:89] found id: ""
	I0311 21:38:33.368614   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.368622   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:33.368629   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:33.368640   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:33.427761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:33.427801   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:33.444440   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:33.444472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:33.527745   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:33.527764   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:33.527775   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:33.608215   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:33.608248   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:35.146541   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:37.146917   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.450175   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:38.949371   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.531125   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:39.028780   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.158253   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:36.172370   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:36.172438   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:36.216905   70908 cri.go:89] found id: ""
	I0311 21:38:36.216935   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.216945   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:36.216951   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:36.216996   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:36.260844   70908 cri.go:89] found id: ""
	I0311 21:38:36.260875   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.260885   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:36.260890   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:36.260941   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:36.306730   70908 cri.go:89] found id: ""
	I0311 21:38:36.306755   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.306767   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:36.306772   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:36.306820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:36.346957   70908 cri.go:89] found id: ""
	I0311 21:38:36.346993   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.347004   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:36.347012   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:36.347082   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:36.392265   70908 cri.go:89] found id: ""
	I0311 21:38:36.392295   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.392306   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:36.392313   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:36.392379   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:36.433383   70908 cri.go:89] found id: ""
	I0311 21:38:36.433407   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.433414   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:36.433421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:36.433467   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:36.471291   70908 cri.go:89] found id: ""
	I0311 21:38:36.471325   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.471336   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:36.471344   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:36.471411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:36.514662   70908 cri.go:89] found id: ""
	I0311 21:38:36.514688   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.514698   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:36.514708   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:36.514722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:36.533222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:36.533251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:36.616359   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:36.616384   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:36.616400   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:36.719105   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:36.719137   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:36.771125   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:36.771156   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.324847   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:39.341149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:39.341218   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:39.380284   70908 cri.go:89] found id: ""
	I0311 21:38:39.380324   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.380335   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:39.380343   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:39.380407   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:39.429860   70908 cri.go:89] found id: ""
	I0311 21:38:39.429886   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.429894   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:39.429899   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:39.429960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:39.468089   70908 cri.go:89] found id: ""
	I0311 21:38:39.468113   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.468121   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:39.468127   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:39.468188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:39.508589   70908 cri.go:89] found id: ""
	I0311 21:38:39.508617   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.508628   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:39.508636   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:39.508695   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:39.552427   70908 cri.go:89] found id: ""
	I0311 21:38:39.552451   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.552459   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:39.552464   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:39.552511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:39.592586   70908 cri.go:89] found id: ""
	I0311 21:38:39.592607   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.592615   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:39.592621   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:39.592670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:39.637138   70908 cri.go:89] found id: ""
	I0311 21:38:39.637167   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.637178   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:39.637186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:39.637248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:39.679422   70908 cri.go:89] found id: ""
	I0311 21:38:39.679457   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.679470   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:39.679482   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:39.679499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.734815   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:39.734850   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:39.750448   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:39.750472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:39.832912   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:39.832936   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:39.832951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:39.924020   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:39.924061   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:39.648759   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:42.146226   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:40.950021   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:42.951344   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:41.528407   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:43.529130   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:43.529166   70458 pod_ready.go:81] duration metric: took 4m0.007627735s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	E0311 21:38:43.529179   70458 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0311 21:38:43.529188   70458 pod_ready.go:38] duration metric: took 4m4.551429192s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:38:43.529207   70458 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:38:43.529242   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:43.529306   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:43.589292   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:43.589314   70458 cri.go:89] found id: ""
	I0311 21:38:43.589323   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:43.589388   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.595182   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:43.595267   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:43.645002   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:43.645027   70458 cri.go:89] found id: ""
	I0311 21:38:43.645036   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:43.645088   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.650463   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:43.650537   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:43.693876   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:43.693894   70458 cri.go:89] found id: ""
	I0311 21:38:43.693902   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:43.693958   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.699273   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:43.699340   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:43.752552   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:43.752585   70458 cri.go:89] found id: ""
	I0311 21:38:43.752596   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:43.752667   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.758307   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:43.758384   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:43.802761   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:43.802789   70458 cri.go:89] found id: ""
	I0311 21:38:43.802798   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:43.802858   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.807796   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:43.807867   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:43.853820   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:43.853843   70458 cri.go:89] found id: ""
	I0311 21:38:43.853851   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:43.853907   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.859377   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:43.859451   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:43.910605   70458 cri.go:89] found id: ""
	I0311 21:38:43.910640   70458 logs.go:276] 0 containers: []
	W0311 21:38:43.910648   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:43.910655   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:43.910702   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:43.955602   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:43.955624   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:43.955629   70458 cri.go:89] found id: ""
	I0311 21:38:43.955645   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:43.955713   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.960856   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.965889   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:43.965919   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:44.013879   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:44.013908   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:44.064641   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:44.064669   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:44.118095   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:44.118120   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:44.177775   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:44.177819   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:44.242090   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:44.242129   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:44.261628   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:44.261665   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:44.322616   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:44.322656   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:44.388117   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:44.388159   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:44.445980   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:44.446018   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:44.980199   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:44.980243   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:45.138312   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:45.138368   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:45.208626   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:45.208664   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:42.472932   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:42.488034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:42.488090   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:42.530945   70908 cri.go:89] found id: ""
	I0311 21:38:42.530971   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.530981   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:42.530989   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:42.531053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:42.571906   70908 cri.go:89] found id: ""
	I0311 21:38:42.571939   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.571951   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:42.571960   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:42.572029   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:42.613198   70908 cri.go:89] found id: ""
	I0311 21:38:42.613228   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.613239   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:42.613247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:42.613330   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:42.654740   70908 cri.go:89] found id: ""
	I0311 21:38:42.654762   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.654770   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:42.654775   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:42.654821   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:42.694797   70908 cri.go:89] found id: ""
	I0311 21:38:42.694836   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.694847   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:42.694854   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:42.694931   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:42.738918   70908 cri.go:89] found id: ""
	I0311 21:38:42.738946   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.738958   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:42.738965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:42.739032   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:42.780836   70908 cri.go:89] found id: ""
	I0311 21:38:42.780870   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.780881   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:42.780888   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:42.780943   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:42.824672   70908 cri.go:89] found id: ""
	I0311 21:38:42.824701   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.824712   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:42.824721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:42.824747   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:42.877219   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:42.877253   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:42.934996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:42.935033   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:42.952125   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:42.952152   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:43.036657   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:43.036678   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:43.036695   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:45.629959   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:45.648501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:45.648581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:45.690083   70908 cri.go:89] found id: ""
	I0311 21:38:45.690117   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.690128   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:45.690136   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:45.690201   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:45.736497   70908 cri.go:89] found id: ""
	I0311 21:38:45.736519   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.736526   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:45.736531   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:45.736576   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:45.778590   70908 cri.go:89] found id: ""
	I0311 21:38:45.778625   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.778636   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:45.778645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:45.778723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:45.822322   70908 cri.go:89] found id: ""
	I0311 21:38:45.822351   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.822359   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:45.822365   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:45.822419   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:45.868591   70908 cri.go:89] found id: ""
	I0311 21:38:45.868618   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.868627   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:45.868633   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:45.868680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:45.915137   70908 cri.go:89] found id: ""
	I0311 21:38:45.915165   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.915178   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:45.915187   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:45.915258   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:45.960432   70908 cri.go:89] found id: ""
	I0311 21:38:45.960459   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.960469   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:45.960476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:45.960529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:46.006089   70908 cri.go:89] found id: ""
	I0311 21:38:46.006168   70908 logs.go:276] 0 containers: []
	W0311 21:38:46.006185   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:46.006195   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:46.006209   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:44.153091   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:46.650654   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:44.951550   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:46.952791   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:47.756629   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:47.776613   70458 api_server.go:72] duration metric: took 4m14.182101385s to wait for apiserver process to appear ...
	I0311 21:38:47.776651   70458 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:38:47.776691   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:47.776774   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:47.826534   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:47.826553   70458 cri.go:89] found id: ""
	I0311 21:38:47.826560   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:47.826609   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.831565   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:47.831637   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:47.876504   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:47.876531   70458 cri.go:89] found id: ""
	I0311 21:38:47.876541   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:47.876598   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.882130   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:47.882224   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:47.930064   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:47.930087   70458 cri.go:89] found id: ""
	I0311 21:38:47.930096   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:47.930139   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.935357   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:47.935433   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:47.989169   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:47.989196   70458 cri.go:89] found id: ""
	I0311 21:38:47.989206   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:47.989262   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.994341   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:47.994401   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:48.037592   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:48.037619   70458 cri.go:89] found id: ""
	I0311 21:38:48.037629   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:48.037692   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.043377   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:48.043453   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:48.088629   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:48.088651   70458 cri.go:89] found id: ""
	I0311 21:38:48.088671   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:48.088722   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.093944   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:48.094016   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:48.144943   70458 cri.go:89] found id: ""
	I0311 21:38:48.144971   70458 logs.go:276] 0 containers: []
	W0311 21:38:48.144983   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:48.144990   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:48.145050   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:48.188857   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:48.188877   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:48.188881   70458 cri.go:89] found id: ""
	I0311 21:38:48.188887   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:48.188934   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.195123   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.200643   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:48.200673   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:48.246864   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:48.246894   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:48.715510   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:48.715545   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:48.775676   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:48.775716   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:48.793121   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:48.793157   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:48.863992   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:48.864040   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:48.922775   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:48.922810   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:48.996820   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:48.996866   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:49.045065   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:49.045097   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:49.199072   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:49.199137   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:49.283329   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:49.283360   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:49.340461   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:49.340502   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:49.391436   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:49.391460   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:46.064257   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:46.064296   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:46.080304   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:46.080337   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:46.177978   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:46.178001   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:46.178017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:46.265260   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:46.265298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:48.814221   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:48.835695   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:48.835793   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:48.898391   70908 cri.go:89] found id: ""
	I0311 21:38:48.898418   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.898429   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:48.898437   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:48.898501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:48.972552   70908 cri.go:89] found id: ""
	I0311 21:38:48.972596   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.972607   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:48.972617   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:48.972684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:49.022346   70908 cri.go:89] found id: ""
	I0311 21:38:49.022371   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.022379   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:49.022384   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:49.022430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:49.078415   70908 cri.go:89] found id: ""
	I0311 21:38:49.078444   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.078455   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:49.078463   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:49.078526   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:49.119369   70908 cri.go:89] found id: ""
	I0311 21:38:49.119402   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.119412   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:49.119420   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:49.119497   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:49.169866   70908 cri.go:89] found id: ""
	I0311 21:38:49.169897   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.169908   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:49.169916   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:49.169978   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:49.223619   70908 cri.go:89] found id: ""
	I0311 21:38:49.223642   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.223650   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:49.223656   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:49.223704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:49.278499   70908 cri.go:89] found id: ""
	I0311 21:38:49.278531   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.278542   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:49.278551   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:49.278563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:49.294734   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:49.294760   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:49.390223   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:49.390252   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:49.390267   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:49.481214   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:49.481250   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:49.530285   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:49.530321   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:49.149825   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.648269   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:53.140832   70604 pod_ready.go:81] duration metric: took 4m0.000856291s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" ...
	E0311 21:38:53.140873   70604 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" (will not retry!)
	I0311 21:38:53.140895   70604 pod_ready.go:38] duration metric: took 4m13.032115697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:38:53.140925   70604 kubeadm.go:591] duration metric: took 4m21.406945055s to restartPrimaryControlPlane
	W0311 21:38:53.140993   70604 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:38:53.141028   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:38:49.450738   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.950491   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:53.952209   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.955522   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:38:51.961814   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0311 21:38:51.963188   70458 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:38:51.963209   70458 api_server.go:131] duration metric: took 4.186550701s to wait for apiserver health ...
	I0311 21:38:51.963218   70458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:38:51.963242   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:51.963294   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:52.020708   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:52.020727   70458 cri.go:89] found id: ""
	I0311 21:38:52.020746   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:52.020815   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.026606   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:52.026668   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:52.072045   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:52.072063   70458 cri.go:89] found id: ""
	I0311 21:38:52.072071   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:52.072130   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.078592   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:52.078771   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:52.139445   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:52.139480   70458 cri.go:89] found id: ""
	I0311 21:38:52.139490   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:52.139548   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.148641   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:52.148724   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:52.199332   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:52.199360   70458 cri.go:89] found id: ""
	I0311 21:38:52.199371   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:52.199433   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.207033   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:52.207096   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:52.267514   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:52.267540   70458 cri.go:89] found id: ""
	I0311 21:38:52.267549   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:52.267615   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.274048   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:52.274132   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:52.330293   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:52.330324   70458 cri.go:89] found id: ""
	I0311 21:38:52.330334   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:52.330395   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.336062   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:52.336143   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:52.381909   70458 cri.go:89] found id: ""
	I0311 21:38:52.381941   70458 logs.go:276] 0 containers: []
	W0311 21:38:52.381952   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:52.381960   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:52.382026   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:52.441879   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:52.441908   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:52.441919   70458 cri.go:89] found id: ""
	I0311 21:38:52.441928   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:52.441988   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.449288   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.456632   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:52.456664   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.526327   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:52.526368   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:52.545008   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:52.545035   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:52.699959   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:52.699995   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:52.762045   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:52.762079   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:52.828963   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:52.829005   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:52.874202   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:52.874237   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:52.916842   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:52.916872   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:52.969778   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:52.969807   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:53.365097   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:53.365147   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:53.446533   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:53.446576   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:53.500017   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:53.500043   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:53.572904   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:53.572954   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:52.087848   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:52.108284   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:52.108351   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:52.161648   70908 cri.go:89] found id: ""
	I0311 21:38:52.161680   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.161691   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:52.161698   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:52.161763   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:52.206552   70908 cri.go:89] found id: ""
	I0311 21:38:52.206577   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.206588   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:52.206596   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:52.206659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:52.253954   70908 cri.go:89] found id: ""
	I0311 21:38:52.253984   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.253996   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:52.254004   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:52.254068   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:52.302343   70908 cri.go:89] found id: ""
	I0311 21:38:52.302384   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.302396   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:52.302404   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:52.302472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:52.345581   70908 cri.go:89] found id: ""
	I0311 21:38:52.345608   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.345618   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:52.345624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:52.345683   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:52.392502   70908 cri.go:89] found id: ""
	I0311 21:38:52.392531   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.392542   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:52.392549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:52.392601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:52.447625   70908 cri.go:89] found id: ""
	I0311 21:38:52.447651   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.447661   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:52.447668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:52.447728   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:52.490965   70908 cri.go:89] found id: ""
	I0311 21:38:52.490994   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.491007   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:52.491019   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:52.491034   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:52.539604   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:52.539650   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.597735   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:52.597771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:52.617572   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:52.617610   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:52.706724   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:52.706753   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:52.706769   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:55.293550   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:55.313904   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:55.314005   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:55.368607   70908 cri.go:89] found id: ""
	I0311 21:38:55.368639   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.368647   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:55.368654   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:55.368714   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:55.434052   70908 cri.go:89] found id: ""
	I0311 21:38:55.434081   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.434092   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:55.434100   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:55.434189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:55.483532   70908 cri.go:89] found id: ""
	I0311 21:38:55.483562   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.483572   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:55.483579   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:55.483647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:55.528681   70908 cri.go:89] found id: ""
	I0311 21:38:55.528708   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.528721   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:55.528728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:55.528825   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:55.583143   70908 cri.go:89] found id: ""
	I0311 21:38:55.583167   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.583174   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:55.583179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:55.583240   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:55.636577   70908 cri.go:89] found id: ""
	I0311 21:38:55.636599   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.636607   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:55.636612   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:55.636670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:55.697268   70908 cri.go:89] found id: ""
	I0311 21:38:55.697295   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.697306   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:55.697314   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:55.697374   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:55.749272   70908 cri.go:89] found id: ""
	I0311 21:38:55.749302   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.749312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:55.749322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:55.749335   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:55.841581   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:55.841643   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:55.898537   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:55.898574   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:55.973278   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:55.973329   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:55.992958   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:55.992986   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:56.137313   70458 system_pods.go:59] 8 kube-system pods found
	I0311 21:38:56.137347   70458 system_pods.go:61] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running
	I0311 21:38:56.137354   70458 system_pods.go:61] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running
	I0311 21:38:56.137361   70458 system_pods.go:61] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running
	I0311 21:38:56.137366   70458 system_pods.go:61] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running
	I0311 21:38:56.137371   70458 system_pods.go:61] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:38:56.137375   70458 system_pods.go:61] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running
	I0311 21:38:56.137383   70458 system_pods.go:61] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:38:56.137390   70458 system_pods.go:61] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:38:56.137400   70458 system_pods.go:74] duration metric: took 4.174175629s to wait for pod list to return data ...
	I0311 21:38:56.137409   70458 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:38:56.140315   70458 default_sa.go:45] found service account: "default"
	I0311 21:38:56.140344   70458 default_sa.go:55] duration metric: took 2.92722ms for default service account to be created ...
	I0311 21:38:56.140356   70458 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:38:56.146873   70458 system_pods.go:86] 8 kube-system pods found
	I0311 21:38:56.146912   70458 system_pods.go:89] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running
	I0311 21:38:56.146923   70458 system_pods.go:89] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running
	I0311 21:38:56.146932   70458 system_pods.go:89] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running
	I0311 21:38:56.146940   70458 system_pods.go:89] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running
	I0311 21:38:56.146945   70458 system_pods.go:89] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:38:56.146951   70458 system_pods.go:89] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running
	I0311 21:38:56.146960   70458 system_pods.go:89] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:38:56.146972   70458 system_pods.go:89] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:38:56.146983   70458 system_pods.go:126] duration metric: took 6.619737ms to wait for k8s-apps to be running ...
	I0311 21:38:56.146998   70458 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:38:56.147056   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:38:56.165354   70458 system_svc.go:56] duration metric: took 18.346754ms WaitForService to wait for kubelet
	I0311 21:38:56.165387   70458 kubeadm.go:576] duration metric: took 4m22.570894549s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:38:56.165413   70458 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:38:56.168819   70458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:38:56.168845   70458 node_conditions.go:123] node cpu capacity is 2
	I0311 21:38:56.168856   70458 node_conditions.go:105] duration metric: took 3.437527ms to run NodePressure ...
	I0311 21:38:56.168868   70458 start.go:240] waiting for startup goroutines ...
	I0311 21:38:56.168875   70458 start.go:245] waiting for cluster config update ...
	I0311 21:38:56.168885   70458 start.go:254] writing updated cluster config ...
	I0311 21:38:56.169153   70458 ssh_runner.go:195] Run: rm -f paused
	I0311 21:38:56.225977   70458 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0311 21:38:56.228234   70458 out.go:177] * Done! kubectl is now configured to use "no-preload-324578" cluster and "default" namespace by default
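
Up to this point pid 70458 has verified, through the Go client rather than kubectl, that all kube-system pods are scheduled, the default service account exists, the kubelet unit is active and the node reports capacity, before declaring the profile ready. Roughly equivalent checks from a host shell, assuming a kubectl context named no-preload-324578; these approximate the in-process checks and are not commands minikube ran:

kubectl --context no-preload-324578 -n kube-system get pods
kubectl --context no-preload-324578 -n default get serviceaccount default
minikube -p no-preload-324578 ssh -- sudo systemctl is-active kubelet
kubectl --context no-preload-324578 describe node no-preload-324578 | grep -E 'cpu:|ephemeral-storage:'
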
	I0311 21:38:56.450729   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:58.450799   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	W0311 21:38:56.084193   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:58.584354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:58.604767   70908 kubeadm.go:591] duration metric: took 4m4.440744932s to restartPrimaryControlPlane
	W0311 21:38:58.604844   70908 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:38:58.604872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:38:59.965834   70908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.36094005s)
	I0311 21:38:59.965906   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:38:59.982020   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:38:59.994794   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:39:00.007116   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:39:00.007138   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:39:00.007182   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:39:00.019744   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:39:00.019802   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:39:00.033311   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:39:00.045608   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:39:00.045685   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:39:00.059722   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.071140   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:39:00.071199   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.082635   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:39:00.093311   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:39:00.093374   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:39:00.104995   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:39:00.372164   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
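
The block above shows the fallback path: after giving up on restarting the existing control plane, minikube resets with kubeadm, finds none of the expected kubeconfig files, removes any that do not reference the expected control-plane endpoint, and re-runs kubeadm init. A condensed sketch of that cleanup-and-init step, using the same paths and binary version as the log (the loop is an interpretation of the four grep/rm pairs above):

# drop any kubeconfig that does not reference the expected endpoint
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f \
    || sudo rm -f /etc/kubernetes/$f
done
# re-initialise the control plane from the generated config
sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml \
  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem
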
	I0311 21:39:00.950799   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:03.450080   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:05.949899   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:07.950640   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:10.450583   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:12.949481   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:14.950496   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:16.951064   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:18.958165   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:21.450609   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:23.949791   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:26.302837   70604 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.161781704s)
	I0311 21:39:26.302921   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:39:26.319602   70604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:39:26.331483   70604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:39:26.343632   70604 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:39:26.343658   70604 kubeadm.go:156] found existing configuration files:
	
	I0311 21:39:26.343705   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:39:26.354863   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:39:26.354919   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:39:26.366087   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:39:26.377221   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:39:26.377282   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:39:26.389769   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:39:26.401201   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:39:26.401255   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:39:26.412357   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:39:26.423962   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:39:26.424035   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:39:26.436189   70604 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:39:26.672030   70604 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:39:25.952857   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:28.449272   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:30.450630   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:32.450912   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:35.908605   70604 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 21:39:35.908656   70604 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:39:35.908751   70604 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:39:35.908846   70604 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:39:35.908967   70604 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:39:35.909026   70604 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:39:35.910690   70604 out.go:204]   - Generating certificates and keys ...
	I0311 21:39:35.910785   70604 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:39:35.910849   70604 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:39:35.910952   70604 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:39:35.911039   70604 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:39:35.911106   70604 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:39:35.911177   70604 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:39:35.911268   70604 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:39:35.911353   70604 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:39:35.911449   70604 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:39:35.911551   70604 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:39:35.911604   70604 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:39:35.911689   70604 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:39:35.911762   70604 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:39:35.911869   70604 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:39:35.911974   70604 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:39:35.912067   70604 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:39:35.912217   70604 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:39:35.912320   70604 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:39:35.914908   70604 out.go:204]   - Booting up control plane ...
	I0311 21:39:35.915026   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:39:35.915126   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:39:35.915216   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:39:35.915321   70604 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:39:35.915431   70604 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:39:35.915487   70604 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:39:35.915659   70604 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:39:35.915792   70604 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503325 seconds
	I0311 21:39:35.915925   70604 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:39:35.916039   70604 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:39:35.916091   70604 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:39:35.916314   70604 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-743937 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:39:35.916408   70604 kubeadm.go:309] [bootstrap-token] Using token: hxeoeg.f2scq51qa57vwzwt
	I0311 21:39:35.917880   70604 out.go:204]   - Configuring RBAC rules ...
	I0311 21:39:35.917995   70604 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:39:35.918093   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:39:35.918297   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:39:35.918490   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:39:35.918629   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:39:35.918745   70604 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:39:35.918907   70604 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:39:35.918974   70604 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:39:35.919031   70604 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:39:35.919048   70604 kubeadm.go:309] 
	I0311 21:39:35.919118   70604 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:39:35.919128   70604 kubeadm.go:309] 
	I0311 21:39:35.919225   70604 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:39:35.919236   70604 kubeadm.go:309] 
	I0311 21:39:35.919266   70604 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:39:35.919344   70604 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:39:35.919405   70604 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:39:35.919412   70604 kubeadm.go:309] 
	I0311 21:39:35.919461   70604 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:39:35.919467   70604 kubeadm.go:309] 
	I0311 21:39:35.919505   70604 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:39:35.919511   70604 kubeadm.go:309] 
	I0311 21:39:35.919553   70604 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:39:35.919640   70604 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:39:35.919727   70604 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:39:35.919736   70604 kubeadm.go:309] 
	I0311 21:39:35.919835   70604 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:39:35.919949   70604 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:39:35.919964   70604 kubeadm.go:309] 
	I0311 21:39:35.920071   70604 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token hxeoeg.f2scq51qa57vwzwt \
	I0311 21:39:35.920172   70604 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:39:35.920193   70604 kubeadm.go:309] 	--control-plane 
	I0311 21:39:35.920199   70604 kubeadm.go:309] 
	I0311 21:39:35.920271   70604 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:39:35.920280   70604 kubeadm.go:309] 
	I0311 21:39:35.920349   70604 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token hxeoeg.f2scq51qa57vwzwt \
	I0311 21:39:35.920479   70604 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:39:35.920507   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:39:35.920517   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:39:35.922125   70604 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:39:35.923386   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:39:35.955828   70604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
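
The 457-byte conflist copied above is not shown in the log. For illustration only, a bridge + portmap configuration of the kind the standard CNI plugins accept; the field values here are generic defaults and not necessarily what minikube writes:

sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
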
	I0311 21:39:36.065309   70604 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:39:36.065389   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:36.065408   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-743937 minikube.k8s.io/updated_at=2024_03_11T21_39_36_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=embed-certs-743937 minikube.k8s.io/primary=true
	I0311 21:39:36.370945   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:36.370961   70604 ops.go:34] apiserver oom_adj: -16
	I0311 21:39:36.871194   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:37.371937   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:37.871974   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:38.371330   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:38.871791   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:34.949300   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:36.942990   70417 pod_ready.go:81] duration metric: took 4m0.000574155s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" ...
	E0311 21:39:36.943022   70417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0311 21:39:36.943043   70417 pod_ready.go:38] duration metric: took 4m12.043798271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:36.943093   70417 kubeadm.go:591] duration metric: took 4m20.121624644s to restartPrimaryControlPlane
	W0311 21:39:36.943155   70417 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:39:36.943183   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:39:39.371531   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:39.872032   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:40.371717   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:40.871615   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:41.371577   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:41.871841   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:42.371050   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:42.871044   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:43.371446   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:43.871815   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:44.371243   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:44.872056   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:45.371993   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:45.871213   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:46.371397   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:46.871185   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.371541   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.871121   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.971855   70604 kubeadm.go:1106] duration metric: took 11.906533451s to wait for elevateKubeSystemPrivileges
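
The repeated "get sa default" calls above are minikube's elevateKubeSystemPrivileges step: it binds cluster-admin to the kube-system default service account and then polls until the default service account exists. The same step as a shell loop; the two kubectl invocations are taken from the log, while the until-loop framing is an interpretation of the polling:

KCTL=/var/lib/minikube/binaries/v1.28.4/kubectl
sudo $KCTL create clusterrolebinding minikube-rbac \
  --clusterrole=cluster-admin --serviceaccount=kube-system:default \
  --kubeconfig=/var/lib/minikube/kubeconfig
# poll until the controller manager has created the default service account
until sudo $KCTL get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done
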
	W0311 21:39:47.971895   70604 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 21:39:47.971902   70604 kubeadm.go:393] duration metric: took 5m16.305518086s to StartCluster
	I0311 21:39:47.971917   70604 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:39:47.972003   70604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:39:47.974339   70604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:39:47.974576   70604 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:39:47.976309   70604 out.go:177] * Verifying Kubernetes components...
	I0311 21:39:47.974638   70604 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:39:47.974819   70604 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:39:47.977737   70604 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-743937"
	I0311 21:39:47.977746   70604 addons.go:69] Setting default-storageclass=true in profile "embed-certs-743937"
	I0311 21:39:47.977779   70604 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-743937"
	W0311 21:39:47.977790   70604 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:39:47.977815   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.977740   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:39:47.977779   70604 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-743937"
	I0311 21:39:47.977750   70604 addons.go:69] Setting metrics-server=true in profile "embed-certs-743937"
	I0311 21:39:47.977943   70604 addons.go:234] Setting addon metrics-server=true in "embed-certs-743937"
	W0311 21:39:47.977957   70604 addons.go:243] addon metrics-server should already be in state true
	I0311 21:39:47.977985   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.978241   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978241   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978270   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.978275   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.978419   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978449   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.994019   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44139
	I0311 21:39:47.994131   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0311 21:39:47.994484   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.994514   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.994964   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.994983   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.995128   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.995143   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.995288   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0311 21:39:47.995437   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.995506   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.995583   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:47.996051   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.996073   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.996516   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.996999   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.997024   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.997383   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.997834   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.997858   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.999381   70604 addons.go:234] Setting addon default-storageclass=true in "embed-certs-743937"
	W0311 21:39:47.999406   70604 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:39:47.999432   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.999794   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.999823   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:48.012063   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41291
	I0311 21:39:48.012470   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.012899   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.012923   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.013267   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.013334   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0311 21:39:48.013484   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.013767   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.014259   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.014279   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.014556   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.014752   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.015486   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.017650   70604 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:39:48.016591   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.019717   70604 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:39:48.019736   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:39:48.019758   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.021823   70604 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:39:48.023083   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:39:48.023095   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:39:48.023108   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.023306   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.023589   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0311 21:39:48.023916   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.023937   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.024255   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.024412   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.024533   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.024653   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.025517   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.025955   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.025967   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.026292   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.027365   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.027654   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:48.027692   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:48.027909   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.027965   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.028188   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.028369   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.028496   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.028603   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.048933   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0311 21:39:48.049338   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.049918   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.049929   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.050342   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.050502   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.052274   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.052523   70604 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:39:48.052537   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:39:48.052554   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.055438   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.055864   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.055881   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.056156   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.056334   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.056495   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.056608   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.175402   70604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:39:48.196199   70604 node_ready.go:35] waiting up to 6m0s for node "embed-certs-743937" to be "Ready" ...
	I0311 21:39:48.215911   70604 node_ready.go:49] node "embed-certs-743937" has status "Ready":"True"
	I0311 21:39:48.215935   70604 node_ready.go:38] duration metric: took 19.701474ms for node "embed-certs-743937" to be "Ready" ...
	I0311 21:39:48.215945   70604 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:48.223525   70604 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.228887   70604 pod_ready.go:92] pod "etcd-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.228907   70604 pod_ready.go:81] duration metric: took 5.35597ms for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.228917   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.233811   70604 pod_ready.go:92] pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.233828   70604 pod_ready.go:81] duration metric: took 4.904721ms for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.233839   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.241831   70604 pod_ready.go:92] pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.241848   70604 pod_ready.go:81] duration metric: took 8.002663ms for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.241857   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.247609   70604 pod_ready.go:92] pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.247633   70604 pod_ready.go:81] duration metric: took 5.767693ms for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.247641   70604 pod_ready.go:38] duration metric: took 31.680305ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:48.247656   70604 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:39:48.247704   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:39:48.270201   70604 api_server.go:72] duration metric: took 295.596568ms to wait for apiserver process to appear ...
	I0311 21:39:48.270224   70604 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:39:48.270242   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:39:48.277642   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0311 21:39:48.280487   70604 api_server.go:141] control plane version: v1.28.4
	I0311 21:39:48.280505   70604 api_server.go:131] duration metric: took 10.273204ms to wait for apiserver health ...
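
After the process check, minikube probes the apiserver's /healthz endpoint directly. The same probe from a shell, assuming the default binding that allows unauthenticated access to /healthz and /version is still in place (minikube's own client authenticates with the cluster certificates):

# expect the body "ok" once the apiserver is healthy
curl -ks https://192.168.50.114:8443/healthz
# the control-plane version reported in the log comes from /version
curl -ks https://192.168.50.114:8443/version
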
	I0311 21:39:48.280514   70604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:39:48.343718   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:39:48.346848   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:39:48.346864   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:39:48.400878   70604 system_pods.go:59] 4 kube-system pods found
	I0311 21:39:48.400907   70604 system_pods.go:61] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:48.400913   70604 system_pods.go:61] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:48.400919   70604 system_pods.go:61] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:48.400923   70604 system_pods.go:61] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:48.400931   70604 system_pods.go:74] duration metric: took 120.410888ms to wait for pod list to return data ...
	I0311 21:39:48.400940   70604 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:39:48.401062   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:39:48.401083   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:39:48.406115   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:39:48.492018   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:39:48.492042   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:39:48.581187   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
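
The two apply invocations above install the storage-provisioner and metrics-server addons from manifests previously copied to /etc/kubernetes/addons. The same commands reflowed for readability, plus one follow-up check (not part of the log) that the aggregated metrics API registered:

KCTL=/var/lib/minikube/binaries/v1.28.4/kubectl
sudo KUBECONFIG=/var/lib/minikube/kubeconfig $KCTL apply \
  -f /etc/kubernetes/addons/storage-provisioner.yaml
sudo KUBECONFIG=/var/lib/minikube/kubeconfig $KCTL apply \
  -f /etc/kubernetes/addons/metrics-apiservice.yaml \
  -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
  -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
  -f /etc/kubernetes/addons/metrics-server-service.yaml
# not in the log: verify the aggregated metrics API came up
sudo KUBECONFIG=/var/lib/minikube/kubeconfig $KCTL get apiservice v1beta1.metrics.k8s.io
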
	I0311 21:39:48.602016   70604 default_sa.go:45] found service account: "default"
	I0311 21:39:48.602046   70604 default_sa.go:55] duration metric: took 201.097662ms for default service account to be created ...
	I0311 21:39:48.602056   70604 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:39:48.862115   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:48.862148   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending
	I0311 21:39:48.862155   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending
	I0311 21:39:48.862159   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:48.862164   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:48.862169   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:48.862176   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:48.862180   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:48.862199   70604 retry.go:31] will retry after 266.08114ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.139648   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.139675   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.139682   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.139689   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.139694   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.139700   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.139706   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.139710   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.139724   70604 retry.go:31] will retry after 293.420416ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.476384   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.476411   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.476418   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.476423   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.476429   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.476433   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.476438   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.476442   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.476456   70604 retry.go:31] will retry after 439.10065ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.927298   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.927337   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.927348   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.927357   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.927366   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.927373   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.927381   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.927389   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.927411   70604 retry.go:31] will retry after 396.604462ms: missing components: kube-dns, kube-proxy
	I0311 21:39:50.092631   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.68647s)
	I0311 21:39:50.092698   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.092718   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093147   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093200   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093223   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093241   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093280   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.749522465s)
	I0311 21:39:50.093321   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093336   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093507   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093529   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093746   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.093759   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093773   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093797   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093805   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.094040   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.094041   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.094067   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.111807   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.111831   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.112109   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.112127   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.112132   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.291598   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.710367476s)
	I0311 21:39:50.291651   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.291671   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.292020   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.292036   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.292044   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.292050   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.292287   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.292328   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.292352   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.292367   70604 addons.go:470] Verifying addon metrics-server=true in "embed-certs-743937"
	I0311 21:39:50.294192   70604 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0311 21:39:50.295405   70604 addons.go:505] duration metric: took 2.320766016s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0311 21:39:50.339623   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:50.339651   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:50.339658   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:50.339665   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:50.339671   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:50.339677   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:50.339682   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:50.339688   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:50.339695   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:50.339704   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:39:50.339728   70604 retry.go:31] will retry after 674.573171ms: missing components: kube-dns
	I0311 21:39:51.021666   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:51.021704   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.021716   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.021723   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:51.021731   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:51.021743   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:51.021754   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:51.021760   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:51.021772   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:51.021786   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:39:51.021805   70604 retry.go:31] will retry after 716.470399ms: missing components: kube-dns
	I0311 21:39:51.745786   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:51.745818   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.745829   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.745840   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:51.745849   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:51.745855   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:51.745861   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:51.745867   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:51.745876   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:51.745886   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Running
	I0311 21:39:51.745904   70604 retry.go:31] will retry after 873.920018ms: missing components: kube-dns
	I0311 21:39:52.627896   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:52.627922   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Running
	I0311 21:39:52.627927   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Running
	I0311 21:39:52.627932   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:52.627936   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:52.627941   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:52.627944   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:52.627948   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:52.627954   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:52.627958   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Running
	I0311 21:39:52.627966   70604 system_pods.go:126] duration metric: took 4.025903884s to wait for k8s-apps to be running ...
	I0311 21:39:52.627976   70604 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:39:52.628017   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:39:52.643356   70604 system_svc.go:56] duration metric: took 15.371853ms WaitForService to wait for kubelet
	I0311 21:39:52.643378   70604 kubeadm.go:576] duration metric: took 4.668777182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:39:52.643394   70604 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:39:52.646844   70604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:39:52.646862   70604 node_conditions.go:123] node cpu capacity is 2
	I0311 21:39:52.646871   70604 node_conditions.go:105] duration metric: took 3.47245ms to run NodePressure ...
	I0311 21:39:52.646881   70604 start.go:240] waiting for startup goroutines ...
	I0311 21:39:52.646891   70604 start.go:245] waiting for cluster config update ...
	I0311 21:39:52.646904   70604 start.go:254] writing updated cluster config ...
	I0311 21:39:52.647207   70604 ssh_runner.go:195] Run: rm -f paused
	I0311 21:39:52.697687   70604 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:39:52.699641   70604 out.go:177] * Done! kubectl is now configured to use "embed-certs-743937" cluster and "default" namespace by default
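	The "will retry after ...: missing components: kube-dns, kube-proxy" lines above are minikube's system-pods wait loop: it repeatedly lists kube-system pods and backs off until every expected control-plane component reports Running. Below is a minimal sketch of that pattern using client-go directly; it is not minikube's actual retry.go/system_pods.go code, and the component list, 6-minute deadline, and backoff window are illustrative assumptions.

	package main

	import (
		"context"
		"fmt"
		"math/rand"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForSystemPods(ctx context.Context, client kubernetes.Interface) error {
		// Components the log waits on before declaring k8s-apps running.
		required := []string{"kube-dns", "kube-proxy", "etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}
		deadline := time.Now().Add(6 * time.Minute)
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			if err != nil {
				return err
			}
			running := map[string]bool{}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					// The k8s-app / component label values identify which piece this pod is.
					for _, v := range p.Labels {
						running[v] = true
					}
				}
			}
			var missing []string
			for _, c := range required {
				if !running[c] {
					missing = append(missing, c)
				}
			}
			if len(missing) == 0 {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out; missing components: %v", missing)
			}
			// Randomized backoff, mirroring the "will retry after ..." lines in the log.
			wait := time.Duration(300+rand.Intn(700)) * time.Millisecond
			fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
			time.Sleep(wait)
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		if err := waitForSystemPods(context.Background(), kubernetes.NewForConfigOrDie(cfg)); err != nil {
			panic(err)
		}
	}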
	I0311 21:40:09.411155   70417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.467938624s)
	I0311 21:40:09.411245   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:09.429951   70417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:40:09.442265   70417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:40:09.453883   70417 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:40:09.453899   70417 kubeadm.go:156] found existing configuration files:
	
	I0311 21:40:09.453934   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0311 21:40:09.465106   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:40:09.465161   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:40:09.476155   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0311 21:40:09.487366   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:40:09.487413   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:40:09.497877   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0311 21:40:09.508056   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:40:09.508096   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:40:09.518709   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0311 21:40:09.529005   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:40:09.529039   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
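	The sequence above (config check failed, then grep for the control-plane endpoint in each kubeconfig, then rm -f) is minikube's stale-config cleanup before re-running kubeadm init. A minimal local sketch of that logic is below; the file list and the https://control-plane.minikube.internal:8444 endpoint come from the log, while running the commands locally via os/exec (rather than over SSH as minikube does) is an assumption for illustration.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8444"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint (or the file itself) is missing.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q not found in %s - removing\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}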
	I0311 21:40:09.539755   70417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:40:09.601265   70417 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 21:40:09.601399   70417 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:09.771387   70417 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:09.771548   70417 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:09.771653   70417 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:10.016610   70417 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:10.018526   70417 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:10.018613   70417 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:10.018670   70417 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:10.018752   70417 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:10.018830   70417 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:10.018926   70417 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:10.019019   70417 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:10.019436   70417 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:10.019924   70417 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:10.020435   70417 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:10.020949   70417 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:10.021470   70417 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:10.021550   70417 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:10.087827   70417 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:10.326702   70417 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:10.515476   70417 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:10.585573   70417 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:10.586277   70417 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:10.588784   70417 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:10.590786   70417 out.go:204]   - Booting up control plane ...
	I0311 21:40:10.590969   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:10.591080   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:10.591164   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:10.613086   70417 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:10.613187   70417 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:10.613224   70417 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:10.753737   70417 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:40:17.258016   70417 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503151 seconds
	I0311 21:40:17.258170   70417 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:40:17.276142   70417 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:40:17.805116   70417 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:40:17.805383   70417 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-766430 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:40:18.323836   70417 kubeadm.go:309] [bootstrap-token] Using token: 9sjslg.sf5b1bfk3wp77z35
	I0311 21:40:18.325382   70417 out.go:204]   - Configuring RBAC rules ...
	I0311 21:40:18.325478   70417 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:40:18.331585   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:40:18.344341   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:40:18.348362   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:40:18.352181   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:40:18.363299   70417 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:40:18.377835   70417 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:40:18.612013   70417 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:40:18.755215   70417 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:40:18.755235   70417 kubeadm.go:309] 
	I0311 21:40:18.755300   70417 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:40:18.755314   70417 kubeadm.go:309] 
	I0311 21:40:18.755434   70417 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:40:18.755460   70417 kubeadm.go:309] 
	I0311 21:40:18.755490   70417 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:40:18.755571   70417 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:40:18.755636   70417 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:40:18.755647   70417 kubeadm.go:309] 
	I0311 21:40:18.755721   70417 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:40:18.755731   70417 kubeadm.go:309] 
	I0311 21:40:18.755794   70417 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:40:18.755804   70417 kubeadm.go:309] 
	I0311 21:40:18.755876   70417 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:40:18.755941   70417 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:40:18.756010   70417 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:40:18.756029   70417 kubeadm.go:309] 
	I0311 21:40:18.756152   70417 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:40:18.756267   70417 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:40:18.756277   70417 kubeadm.go:309] 
	I0311 21:40:18.756391   70417 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 9sjslg.sf5b1bfk3wp77z35 \
	I0311 21:40:18.756533   70417 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:40:18.756578   70417 kubeadm.go:309] 	--control-plane 
	I0311 21:40:18.756585   70417 kubeadm.go:309] 
	I0311 21:40:18.756695   70417 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:40:18.756706   70417 kubeadm.go:309] 
	I0311 21:40:18.756844   70417 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 9sjslg.sf5b1bfk3wp77z35 \
	I0311 21:40:18.757021   70417 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:40:18.759444   70417 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:40:18.759474   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:40:18.759489   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:40:18.761354   70417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:40:18.762676   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:40:18.793496   70417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
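	The 457-byte 1-k8s.conflist copied to /etc/cni/net.d above is the bridge CNI configuration minikube generates for the kvm2 + crio combination. The sketch below writes an illustrative bridge conflist of the same shape; the exact file minikube produces differs, and the plugin fields and the 10.244.0.0/16 pod subnet here are assumptions, not the byte-for-byte contents from this run.

	package main

	import (
		"os"
		"path/filepath"
	)

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		dir := "/etc/cni/net.d"
		if err := os.MkdirAll(dir, 0o755); err != nil { // same effect as the `sudo mkdir -p` in the log
			panic(err)
		}
		if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}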
	I0311 21:40:18.840426   70417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:40:18.840508   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:18.840508   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-766430 minikube.k8s.io/updated_at=2024_03_11T21_40_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=default-k8s-diff-port-766430 minikube.k8s.io/primary=true
	I0311 21:40:19.150012   70417 ops.go:34] apiserver oom_adj: -16
	I0311 21:40:19.150129   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:19.650947   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:20.150969   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:20.650687   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:21.150849   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:21.650356   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:22.150737   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:22.650225   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:23.150390   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:23.650650   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:24.151081   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:24.650689   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:25.150428   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:25.650265   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:26.150198   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:26.650610   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:27.150325   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:27.650794   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:28.150855   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:28.650819   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:29.150345   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:29.650746   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.150910   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.650742   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.790472   70417 kubeadm.go:1106] duration metric: took 11.95003413s to wait for elevateKubeSystemPrivileges
	W0311 21:40:30.790506   70417 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 21:40:30.790513   70417 kubeadm.go:393] duration metric: took 5m14.024392605s to StartCluster
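	The repeated `kubectl get sa default` calls above (roughly every 500ms for ~12s, logged as elevateKubeSystemPrivileges) are waiting for the token controller to create the "default" ServiceAccount before the cluster-admin role binding can take effect. A minimal sketch of the same wait using client-go instead of shelling out to kubectl is below; the 500ms interval and 2-minute timeout are illustrative assumptions.

	package main

	import (
		"context"
		"time"

		"k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForDefaultServiceAccount(client kubernetes.Interface) error {
		return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
			_, err := client.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
			if errors.IsNotFound(err) {
				return false, nil // token controller hasn't created it yet; keep polling
			}
			return err == nil, err
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		if err := waitForDefaultServiceAccount(kubernetes.NewForConfigOrDie(cfg)); err != nil {
			panic(err)
		}
	}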
	I0311 21:40:30.790527   70417 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:40:30.790630   70417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:40:30.792582   70417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:40:30.792843   70417 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:40:30.794425   70417 out.go:177] * Verifying Kubernetes components...
	I0311 21:40:30.792920   70417 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:40:30.793051   70417 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:40:30.796119   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:40:30.796129   70417 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796160   70417 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.796171   70417 addons.go:243] addon metrics-server should already be in state true
	I0311 21:40:30.796197   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.796121   70417 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796127   70417 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796237   70417 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.796253   70417 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:40:30.796268   70417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-766430"
	I0311 21:40:30.796278   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.796663   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796694   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796699   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.796722   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.796777   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796807   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.812156   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0311 21:40:30.812601   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.813108   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.813138   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.813532   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.813995   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.814031   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.816427   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I0311 21:40:30.816626   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0311 21:40:30.816863   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.817015   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.817365   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.817385   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.817532   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.817557   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.817905   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.817908   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.818696   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.819070   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.819100   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.822839   70417 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.822858   70417 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:40:30.822885   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.823188   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.823202   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.834007   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I0311 21:40:30.834521   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.835017   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.835033   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.835418   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.835620   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.837838   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.839548   70417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:40:30.838397   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I0311 21:40:30.840244   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0311 21:40:30.840869   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:40:30.840885   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:40:30.840904   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.841295   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.841345   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.841877   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.841894   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.841994   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.842012   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.842246   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.842414   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.842448   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.842960   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.842985   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.844184   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.844406   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.845769   70417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:40:30.847105   70417 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:40:30.844838   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.847124   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:40:30.847142   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.845110   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.847151   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.847302   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.847424   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.847550   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:30.849856   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.850205   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.850232   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.850414   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.850575   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.850697   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.850835   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:30.861464   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0311 21:40:30.861799   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.862252   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.862271   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.862655   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.862818   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.864692   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.864956   70417 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:40:30.864978   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:40:30.864996   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.867548   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.867980   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.868013   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.868140   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.868300   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.868433   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.868558   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:31.037958   70417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:40:31.081173   70417 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-766430" to be "Ready" ...
	I0311 21:40:31.103697   70417 node_ready.go:49] node "default-k8s-diff-port-766430" has status "Ready":"True"
	I0311 21:40:31.103717   70417 node_ready.go:38] duration metric: took 22.519334ms for node "default-k8s-diff-port-766430" to be "Ready" ...
	I0311 21:40:31.103726   70417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:40:31.129595   70417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:31.184749   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:40:31.184771   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:40:31.194340   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:40:31.213567   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:40:31.213589   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:40:31.255647   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:40:31.255667   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:40:31.284917   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:40:31.309356   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:40:32.792293   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.597920266s)
	I0311 21:40:32.792337   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.792351   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.792625   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.792686   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.792703   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:32.792714   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.792724   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.793060   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.793086   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:32.793137   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.811230   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.811254   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.811583   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.811587   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.811606   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.156126   70417 pod_ready.go:92] pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.156148   70417 pod_ready.go:81] duration metric: took 2.026531002s for pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.156156   70417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.174226   70417 pod_ready.go:92] pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.174248   70417 pod_ready.go:81] duration metric: took 18.0858ms for pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.174257   70417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.186296   70417 pod_ready.go:92] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.186329   70417 pod_ready.go:81] duration metric: took 12.06396ms for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.186344   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.195902   70417 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.195930   70417 pod_ready.go:81] duration metric: took 9.577334ms for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.195945   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.203134   70417 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.203160   70417 pod_ready.go:81] duration metric: took 7.205172ms for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.203174   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t4fwc" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.449290   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.164324973s)
	I0311 21:40:33.449341   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.139948099s)
	I0311 21:40:33.449374   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449392   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449346   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449461   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449662   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449678   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449688   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449697   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449751   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.449795   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449810   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449823   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449836   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449886   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449905   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449926   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.450213   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.450256   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.450263   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.450272   70417 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-766430"
	I0311 21:40:33.453444   70417 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0311 21:40:33.454670   70417 addons.go:505] duration metric: took 2.661756652s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0311 21:40:33.534893   70417 pod_ready.go:92] pod "kube-proxy-t4fwc" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.534915   70417 pod_ready.go:81] duration metric: took 331.733613ms for pod "kube-proxy-t4fwc" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.534924   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.933950   70417 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.933973   70417 pod_ready.go:81] duration metric: took 399.042085ms for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.933981   70417 pod_ready.go:38] duration metric: took 2.830245804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:40:33.933994   70417 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:40:33.934053   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:40:33.953607   70417 api_server.go:72] duration metric: took 3.160728268s to wait for apiserver process to appear ...
	I0311 21:40:33.953629   70417 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:40:33.953650   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:40:33.959064   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0311 21:40:33.960101   70417 api_server.go:141] control plane version: v1.28.4
	I0311 21:40:33.960125   70417 api_server.go:131] duration metric: took 6.489682ms to wait for apiserver health ...
	I0311 21:40:33.960135   70417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:40:34.137026   70417 system_pods.go:59] 9 kube-system pods found
	I0311 21:40:34.137061   70417 system_pods.go:61] "coredns-5dd5756b68-kxjhf" [09678270-80f4-4bde-8080-3a3a41ecb356] Running
	I0311 21:40:34.137079   70417 system_pods.go:61] "coredns-5dd5756b68-qdcdw" [9f100559-2b0a-4068-a3e7-475b5865a1d9] Running
	I0311 21:40:34.137086   70417 system_pods.go:61] "etcd-default-k8s-diff-port-766430" [c09576c7-db47-4ce1-a8cb-d67926c413fe] Running
	I0311 21:40:34.137093   70417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766430" [f74a16b9-5e73-450f-bc62-c2e501a15ae2] Running
	I0311 21:40:34.137100   70417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766430" [abf4c5ea-4770-49a5-8480-dc9276663588] Running
	I0311 21:40:34.137105   70417 system_pods.go:61] "kube-proxy-t4fwc" [2b82ae7c-bffe-4fe4-b38c-3a789654df85] Running
	I0311 21:40:34.137111   70417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766430" [b1a26b37-7480-4f5c-bd99-785facd8b315] Running
	I0311 21:40:34.137121   70417 system_pods.go:61] "metrics-server-57f55c9bc5-9slpq" [ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:40:34.137133   70417 system_pods.go:61] "storage-provisioner" [d1d4992a-803a-4064-b372-6ba9729bd2ef] Running
	I0311 21:40:34.137147   70417 system_pods.go:74] duration metric: took 177.004603ms to wait for pod list to return data ...
	I0311 21:40:34.137201   70417 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:40:34.333563   70417 default_sa.go:45] found service account: "default"
	I0311 21:40:34.333589   70417 default_sa.go:55] duration metric: took 196.374123ms for default service account to be created ...
	I0311 21:40:34.333600   70417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:40:34.537376   70417 system_pods.go:86] 9 kube-system pods found
	I0311 21:40:34.537401   70417 system_pods.go:89] "coredns-5dd5756b68-kxjhf" [09678270-80f4-4bde-8080-3a3a41ecb356] Running
	I0311 21:40:34.537406   70417 system_pods.go:89] "coredns-5dd5756b68-qdcdw" [9f100559-2b0a-4068-a3e7-475b5865a1d9] Running
	I0311 21:40:34.537411   70417 system_pods.go:89] "etcd-default-k8s-diff-port-766430" [c09576c7-db47-4ce1-a8cb-d67926c413fe] Running
	I0311 21:40:34.537415   70417 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766430" [f74a16b9-5e73-450f-bc62-c2e501a15ae2] Running
	I0311 21:40:34.537420   70417 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766430" [abf4c5ea-4770-49a5-8480-dc9276663588] Running
	I0311 21:40:34.537423   70417 system_pods.go:89] "kube-proxy-t4fwc" [2b82ae7c-bffe-4fe4-b38c-3a789654df85] Running
	I0311 21:40:34.537427   70417 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766430" [b1a26b37-7480-4f5c-bd99-785facd8b315] Running
	I0311 21:40:34.537433   70417 system_pods.go:89] "metrics-server-57f55c9bc5-9slpq" [ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:40:34.537438   70417 system_pods.go:89] "storage-provisioner" [d1d4992a-803a-4064-b372-6ba9729bd2ef] Running
	I0311 21:40:34.537447   70417 system_pods.go:126] duration metric: took 203.840784ms to wait for k8s-apps to be running ...
	I0311 21:40:34.537453   70417 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:40:34.537493   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:34.555483   70417 system_svc.go:56] duration metric: took 18.021595ms WaitForService to wait for kubelet
	I0311 21:40:34.555511   70417 kubeadm.go:576] duration metric: took 3.76263503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:40:34.555534   70417 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:40:34.735214   70417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:40:34.735238   70417 node_conditions.go:123] node cpu capacity is 2
	I0311 21:40:34.735248   70417 node_conditions.go:105] duration metric: took 179.707447ms to run NodePressure ...
	I0311 21:40:34.735258   70417 start.go:240] waiting for startup goroutines ...
	I0311 21:40:34.735264   70417 start.go:245] waiting for cluster config update ...
	I0311 21:40:34.735274   70417 start.go:254] writing updated cluster config ...
	I0311 21:40:34.735539   70417 ssh_runner.go:195] Run: rm -f paused
	I0311 21:40:34.782710   70417 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:40:34.784627   70417 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-766430" cluster and "default" namespace by default
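	The readiness gates logged above (apiserver healthz, apiserver process, kubelet service) can be spot-checked by hand; a minimal sketch, assuming this run's profile name "default-k8s-diff-port-766430" and the endpoint the log polls, with the sudo commands run on the node (e.g. via `minikube ssh -p default-k8s-diff-port-766430`):
	# Spot-check sketch for the readiness gates above (assumptions noted in the lead-in).
	kubectl --context default-k8s-diff-port-766430 get --raw /healthz   # expects "ok", matching the "returned 200: ok" line
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'                        # apiserver process check, as run by the log
	sudo systemctl is-active --quiet kubelet && echo "kubelet active"   # kubelet service check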
	I0311 21:40:56.380462   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:40:56.380539   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:40:56.382217   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:56.382264   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:56.382349   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:56.382450   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:56.382619   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:56.382712   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:56.384498   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:56.384579   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:56.384636   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:56.384766   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:56.384863   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:56.384967   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:56.385037   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:56.385139   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:56.385208   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:56.385281   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:56.385357   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:56.385408   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:56.385492   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:56.385567   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:56.385644   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:56.385769   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:56.385855   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:56.385962   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:56.386053   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:56.386104   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:56.386184   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:56.387594   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:56.387671   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:56.387738   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:56.387811   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:56.387914   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:56.388107   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:40:56.388182   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:40:56.388297   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388522   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388614   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388844   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388914   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389074   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389131   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389314   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389405   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389594   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389603   70908 kubeadm.go:309] 
	I0311 21:40:56.389653   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:40:56.389720   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:40:56.389732   70908 kubeadm.go:309] 
	I0311 21:40:56.389779   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:40:56.389811   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:40:56.389924   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:40:56.389933   70908 kubeadm.go:309] 
	I0311 21:40:56.390058   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:40:56.390109   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:40:56.390150   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:40:56.390159   70908 kubeadm.go:309] 
	I0311 21:40:56.390299   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:40:56.390395   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:40:56.390409   70908 kubeadm.go:309] 
	I0311 21:40:56.390512   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:40:56.390603   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:40:56.390702   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:40:56.390803   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:40:56.390833   70908 kubeadm.go:309] 
	W0311 21:40:56.390936   70908 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0311 21:40:56.390995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:40:56.941058   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:56.958276   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:40:56.970464   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:40:56.970493   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:40:56.970552   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:40:56.983314   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:40:56.983372   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:40:56.993791   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:40:57.004040   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:40:57.004098   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:40:57.014471   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.024751   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:40:57.024805   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.035389   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:40:57.045511   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:40:57.045556   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
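	The sequence above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that is missing or does not match before kubeadm init is retried. A minimal shell sketch of that cleanup loop, assuming the same endpoint the log greps for:
	# Stale kubeconfig cleanup, mirroring the grep/rm steps above.
	ENDPOINT="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  path="/etc/kubernetes/$f"
	  # grep exits non-zero if the endpoint is absent or the file does not exist;
	  # either way the file is treated as stale and removed.
	  if ! sudo grep -q "$ENDPOINT" "$path" 2>/dev/null; then
	    sudo rm -f "$path"
	  fi
	done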
	I0311 21:40:57.056774   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:40:57.140620   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:57.140789   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:57.310076   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:57.310193   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:57.310280   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:57.506834   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:57.509261   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:57.509362   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:57.509446   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:57.509576   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:57.509669   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:57.509765   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:57.509839   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:57.509949   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:57.510004   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:57.510109   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:57.510231   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:57.510274   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:57.510361   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:57.585562   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:57.644460   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:57.784382   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:57.848952   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:57.867302   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:57.867791   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:57.867864   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:58.036523   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:58.039051   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:58.039176   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:58.054234   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:58.055548   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:58.057378   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:58.060167   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:41:38.062360   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:41:38.062886   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:38.063137   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:43.063592   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:43.063788   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:53.064505   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:53.064773   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:13.065744   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:13.065995   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.066718   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:53.067030   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.067070   70908 kubeadm.go:309] 
	I0311 21:42:53.067135   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:42:53.067191   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:42:53.067203   70908 kubeadm.go:309] 
	I0311 21:42:53.067259   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:42:53.067318   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:42:53.067456   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:42:53.067466   70908 kubeadm.go:309] 
	I0311 21:42:53.067590   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:42:53.067650   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:42:53.067724   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:42:53.067735   70908 kubeadm.go:309] 
	I0311 21:42:53.067889   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:42:53.068021   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:42:53.068036   70908 kubeadm.go:309] 
	I0311 21:42:53.068169   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:42:53.068297   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:42:53.068412   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:42:53.068512   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:42:53.068523   70908 kubeadm.go:309] 
	I0311 21:42:53.069455   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:42:53.069572   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:42:53.069682   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:42:53.069775   70908 kubeadm.go:393] duration metric: took 7m58.960224884s to StartCluster
	I0311 21:42:53.069833   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:42:53.069899   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:42:53.120459   70908 cri.go:89] found id: ""
	I0311 21:42:53.120486   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.120497   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:42:53.120505   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:42:53.120564   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:42:53.159639   70908 cri.go:89] found id: ""
	I0311 21:42:53.159667   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.159676   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:42:53.159682   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:42:53.159738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:42:53.199584   70908 cri.go:89] found id: ""
	I0311 21:42:53.199607   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.199614   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:42:53.199619   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:42:53.199676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:42:53.238868   70908 cri.go:89] found id: ""
	I0311 21:42:53.238901   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.238908   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:42:53.238917   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:42:53.238963   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:42:53.282172   70908 cri.go:89] found id: ""
	I0311 21:42:53.282205   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.282216   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:42:53.282225   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:42:53.282278   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:42:53.318450   70908 cri.go:89] found id: ""
	I0311 21:42:53.318481   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.318491   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:42:53.318499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:42:53.318559   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:42:53.360887   70908 cri.go:89] found id: ""
	I0311 21:42:53.360913   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.360923   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:42:53.360930   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:42:53.361027   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:42:53.414181   70908 cri.go:89] found id: ""
	I0311 21:42:53.414209   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.414220   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
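	The survey above asks CRI-O for each control-plane component by name and finds none, consistent with the kubelet never launching the static pods. A minimal sketch of the same per-component query, using only the crictl invocation already shown in this log:
	# Per-component container query, as in the cri.go listing above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $name =="
	  sudo crictl ps -a --quiet --name="$name"
	done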
	I0311 21:42:53.414232   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:42:53.414247   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:42:53.478658   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:42:53.478689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:42:53.494577   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:42:53.494604   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:42:53.586460   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:42:53.586483   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:42:53.586500   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:42:53.697218   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:42:53.697251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0311 21:42:53.746291   70908 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0311 21:42:53.746336   70908 out.go:239] * 
	W0311 21:42:53.746388   70908 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.746409   70908 out.go:239] * 
	W0311 21:42:53.747362   70908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 21:42:53.750888   70908 out.go:177] 
	W0311 21:42:53.752146   70908 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.752211   70908 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0311 21:42:53.752239   70908 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0311 21:42:53.753832   70908 out.go:177] 
	
	
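	Before the CRI-O excerpt below: the troubleshooting advice repeated in the kubeadm output reduces to a short triage on the node itself. A minimal sketch of those steps, using only the commands the messages above already name (run via `minikube ssh` or directly on the VM):
	# Kubelet / control-plane triage suggested by the kubeadm output above.
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# List any control-plane containers CRI-O started, then inspect one by ID.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>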
	==> CRI-O <==
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.722048161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193734722024070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ed5b066-92d4-4ba9-8b9f-8cd3a8575426 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.722878617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26afc63f-2770-4728-842c-487e6ecf00c6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.722930560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26afc63f-2770-4728-842c-487e6ecf00c6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.723135371Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c735d180d5d0680318bcfdd8e1508a82b2181aef6108badc75c9d29b0713af9,PodSandboxId:43387911d61cb4d07d6f1fb9b52b7769cfe6b47e58b83a4e5463857d1bc4c216,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190961600303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-58ct4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa2415-2468-4a6d-887f-5eb6e455bbea,},Annotations:map[string]string{io.kubernetes.container.hash: 2b42a678,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4290fa687c68e62428910cf34c67eba8505eebffa114ebfc5fabe86ed057e4a8,PodSandboxId:4f033c8242f61023c64508a0545af22b41c820d4ff51bce7ca65f7de639836b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190977082194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hct77,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31,},Annotations:map[string]string{io.kubernetes.container.hash: ac3c9c5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b933a93694d7512040b9cc8038beec371ceaa7ae68f6990c4e899e1732503bd5,PodSandboxId:a0b2d2af8dc36b2322fa28253098075739c367de4bad1995d47b81cebf24b347,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1710193190469893544,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2096cbb5-d96f-48f5-a04a-eb596646c8ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8016b8d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11079c6b59c6771cb52b55b16525d47ef7a0c4a1a3717185d973b0cdb18aadf1,PodSandboxId:cd4ad099890fe71f332d6eec01f238230e611608b938e29ab6d8e8c77ac7e689,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710193188958511076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xmlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18fd74c-17fa-44f1-a7e4-ab19fffe497b,},Annotations:map[string]string{io.kubernetes.container.hash: 710f9e96,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe64dcf976f8a0834063fd35ba390a65c7e0bfe5003a39b02b08afa61573aa2,PodSandboxId:050e29796e725a6f07f4cc48aef1f38c2a0aebf677e2719716918b6e65de342a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193169049097048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ab71d9e2769e4182c88a6eb69c8122b,},Annotations:map[string]string{io.kubernetes.container.hash: dfd8d50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ba015fd640fda2171160b84f0a095794044e81a7399129debb70a95b42a575,PodSandboxId:44c56a97476a82cf7683b3fe872c9a4d07df73b8972d1ccc3b6ba856fc0dd88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710193169107290776,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a62d4b44a6092755ab406b1e90d15d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4204959d26a528a733e6a7fa26e1713a70b7e38a551fff229e5a4fea09488e0f,PodSandboxId:239a8b464db4f02efd7749346c1df15d1845bf3bf367ae19492efe6e2c1b9ea5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193169046953603,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be4934f6a04f3c4cd4c7f296acc8388,},Annotations:map[string]string{io.kubernetes.container.hash: 9d16a9fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ce09219ccdf054c50e8ba218609b581ede2f5176b69a7658537ca3028fd498,PodSandboxId:793cb1b96101c89dc8306ca2677f480c465f83d2707a1049b42e99f314a3e27e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193168998813148,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1285b61656e642fefcf84d28bd25203,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26afc63f-2770-4728-842c-487e6ecf00c6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.768352885Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b5b96a7-3f37-4d08-8604-cc014cbfc8d1 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.768558515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b5b96a7-3f37-4d08-8604-cc014cbfc8d1 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.769661217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83da9211-41b1-496d-83bc-dd0aec0214c3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.770225797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193734770197649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83da9211-41b1-496d-83bc-dd0aec0214c3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.770942015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1d633bf-195d-45bf-bced-87d8aea03c6a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.770992653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1d633bf-195d-45bf-bced-87d8aea03c6a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.771187899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c735d180d5d0680318bcfdd8e1508a82b2181aef6108badc75c9d29b0713af9,PodSandboxId:43387911d61cb4d07d6f1fb9b52b7769cfe6b47e58b83a4e5463857d1bc4c216,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190961600303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-58ct4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa2415-2468-4a6d-887f-5eb6e455bbea,},Annotations:map[string]string{io.kubernetes.container.hash: 2b42a678,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4290fa687c68e62428910cf34c67eba8505eebffa114ebfc5fabe86ed057e4a8,PodSandboxId:4f033c8242f61023c64508a0545af22b41c820d4ff51bce7ca65f7de639836b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190977082194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hct77,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31,},Annotations:map[string]string{io.kubernetes.container.hash: ac3c9c5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b933a93694d7512040b9cc8038beec371ceaa7ae68f6990c4e899e1732503bd5,PodSandboxId:a0b2d2af8dc36b2322fa28253098075739c367de4bad1995d47b81cebf24b347,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1710193190469893544,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2096cbb5-d96f-48f5-a04a-eb596646c8ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8016b8d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11079c6b59c6771cb52b55b16525d47ef7a0c4a1a3717185d973b0cdb18aadf1,PodSandboxId:cd4ad099890fe71f332d6eec01f238230e611608b938e29ab6d8e8c77ac7e689,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710193188958511076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xmlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18fd74c-17fa-44f1-a7e4-ab19fffe497b,},Annotations:map[string]string{io.kubernetes.container.hash: 710f9e96,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe64dcf976f8a0834063fd35ba390a65c7e0bfe5003a39b02b08afa61573aa2,PodSandboxId:050e29796e725a6f07f4cc48aef1f38c2a0aebf677e2719716918b6e65de342a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193169049097048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ab71d9e2769e4182c88a6eb69c8122b,},Annotations:map[string]string{io.kubernetes.container.hash: dfd8d50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ba015fd640fda2171160b84f0a095794044e81a7399129debb70a95b42a575,PodSandboxId:44c56a97476a82cf7683b3fe872c9a4d07df73b8972d1ccc3b6ba856fc0dd88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710193169107290776,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a62d4b44a6092755ab406b1e90d15d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4204959d26a528a733e6a7fa26e1713a70b7e38a551fff229e5a4fea09488e0f,PodSandboxId:239a8b464db4f02efd7749346c1df15d1845bf3bf367ae19492efe6e2c1b9ea5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193169046953603,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be4934f6a04f3c4cd4c7f296acc8388,},Annotations:map[string]string{io.kubernetes.container.hash: 9d16a9fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ce09219ccdf054c50e8ba218609b581ede2f5176b69a7658537ca3028fd498,PodSandboxId:793cb1b96101c89dc8306ca2677f480c465f83d2707a1049b42e99f314a3e27e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193168998813148,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1285b61656e642fefcf84d28bd25203,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1d633bf-195d-45bf-bced-87d8aea03c6a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.815960273Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=694a2981-8a55-42de-aaa8-118a70f4f3ba name=/runtime.v1.RuntimeService/Version
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.816023515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=694a2981-8a55-42de-aaa8-118a70f4f3ba name=/runtime.v1.RuntimeService/Version
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.817591989Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6c2b546-5f6b-4207-9201-1bf9be4d1a81 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.818037073Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193734818014454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6c2b546-5f6b-4207-9201-1bf9be4d1a81 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.818898176Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fef65ecc-a399-4ee3-ab7d-23738d4e3912 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.818950063Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fef65ecc-a399-4ee3-ab7d-23738d4e3912 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.819136695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c735d180d5d0680318bcfdd8e1508a82b2181aef6108badc75c9d29b0713af9,PodSandboxId:43387911d61cb4d07d6f1fb9b52b7769cfe6b47e58b83a4e5463857d1bc4c216,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190961600303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-58ct4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa2415-2468-4a6d-887f-5eb6e455bbea,},Annotations:map[string]string{io.kubernetes.container.hash: 2b42a678,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4290fa687c68e62428910cf34c67eba8505eebffa114ebfc5fabe86ed057e4a8,PodSandboxId:4f033c8242f61023c64508a0545af22b41c820d4ff51bce7ca65f7de639836b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190977082194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hct77,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31,},Annotations:map[string]string{io.kubernetes.container.hash: ac3c9c5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b933a93694d7512040b9cc8038beec371ceaa7ae68f6990c4e899e1732503bd5,PodSandboxId:a0b2d2af8dc36b2322fa28253098075739c367de4bad1995d47b81cebf24b347,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1710193190469893544,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2096cbb5-d96f-48f5-a04a-eb596646c8ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8016b8d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11079c6b59c6771cb52b55b16525d47ef7a0c4a1a3717185d973b0cdb18aadf1,PodSandboxId:cd4ad099890fe71f332d6eec01f238230e611608b938e29ab6d8e8c77ac7e689,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710193188958511076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xmlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18fd74c-17fa-44f1-a7e4-ab19fffe497b,},Annotations:map[string]string{io.kubernetes.container.hash: 710f9e96,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe64dcf976f8a0834063fd35ba390a65c7e0bfe5003a39b02b08afa61573aa2,PodSandboxId:050e29796e725a6f07f4cc48aef1f38c2a0aebf677e2719716918b6e65de342a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193169049097048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ab71d9e2769e4182c88a6eb69c8122b,},Annotations:map[string]string{io.kubernetes.container.hash: dfd8d50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ba015fd640fda2171160b84f0a095794044e81a7399129debb70a95b42a575,PodSandboxId:44c56a97476a82cf7683b3fe872c9a4d07df73b8972d1ccc3b6ba856fc0dd88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710193169107290776,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a62d4b44a6092755ab406b1e90d15d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4204959d26a528a733e6a7fa26e1713a70b7e38a551fff229e5a4fea09488e0f,PodSandboxId:239a8b464db4f02efd7749346c1df15d1845bf3bf367ae19492efe6e2c1b9ea5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193169046953603,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be4934f6a04f3c4cd4c7f296acc8388,},Annotations:map[string]string{io.kubernetes.container.hash: 9d16a9fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ce09219ccdf054c50e8ba218609b581ede2f5176b69a7658537ca3028fd498,PodSandboxId:793cb1b96101c89dc8306ca2677f480c465f83d2707a1049b42e99f314a3e27e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193168998813148,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1285b61656e642fefcf84d28bd25203,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fef65ecc-a399-4ee3-ab7d-23738d4e3912 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.859361707Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb661c87-2581-4682-ad04-2526dc7cdebb name=/runtime.v1.RuntimeService/Version
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.859487465Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb661c87-2581-4682-ad04-2526dc7cdebb name=/runtime.v1.RuntimeService/Version
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.860498238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=168653bb-0bec-4201-b621-72474d00b40c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.860933704Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193734860853267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=168653bb-0bec-4201-b621-72474d00b40c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.861638670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=410709cc-93b0-409d-a9a7-b0bb093d9f2f name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.861686148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=410709cc-93b0-409d-a9a7-b0bb093d9f2f name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:48:54 embed-certs-743937 crio[686]: time="2024-03-11 21:48:54.861891312Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c735d180d5d0680318bcfdd8e1508a82b2181aef6108badc75c9d29b0713af9,PodSandboxId:43387911d61cb4d07d6f1fb9b52b7769cfe6b47e58b83a4e5463857d1bc4c216,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190961600303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-58ct4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa2415-2468-4a6d-887f-5eb6e455bbea,},Annotations:map[string]string{io.kubernetes.container.hash: 2b42a678,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4290fa687c68e62428910cf34c67eba8505eebffa114ebfc5fabe86ed057e4a8,PodSandboxId:4f033c8242f61023c64508a0545af22b41c820d4ff51bce7ca65f7de639836b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190977082194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hct77,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31,},Annotations:map[string]string{io.kubernetes.container.hash: ac3c9c5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b933a93694d7512040b9cc8038beec371ceaa7ae68f6990c4e899e1732503bd5,PodSandboxId:a0b2d2af8dc36b2322fa28253098075739c367de4bad1995d47b81cebf24b347,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1710193190469893544,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2096cbb5-d96f-48f5-a04a-eb596646c8ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8016b8d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11079c6b59c6771cb52b55b16525d47ef7a0c4a1a3717185d973b0cdb18aadf1,PodSandboxId:cd4ad099890fe71f332d6eec01f238230e611608b938e29ab6d8e8c77ac7e689,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710193188958511076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xmlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18fd74c-17fa-44f1-a7e4-ab19fffe497b,},Annotations:map[string]string{io.kubernetes.container.hash: 710f9e96,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe64dcf976f8a0834063fd35ba390a65c7e0bfe5003a39b02b08afa61573aa2,PodSandboxId:050e29796e725a6f07f4cc48aef1f38c2a0aebf677e2719716918b6e65de342a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193169049097048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ab71d9e2769e4182c88a6eb69c8122b,},Annotations:map[string]string{io.kubernetes.container.hash: dfd8d50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ba015fd640fda2171160b84f0a095794044e81a7399129debb70a95b42a575,PodSandboxId:44c56a97476a82cf7683b3fe872c9a4d07df73b8972d1ccc3b6ba856fc0dd88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710193169107290776,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a62d4b44a6092755ab406b1e90d15d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4204959d26a528a733e6a7fa26e1713a70b7e38a551fff229e5a4fea09488e0f,PodSandboxId:239a8b464db4f02efd7749346c1df15d1845bf3bf367ae19492efe6e2c1b9ea5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193169046953603,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be4934f6a04f3c4cd4c7f296acc8388,},Annotations:map[string]string{io.kubernetes.container.hash: 9d16a9fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ce09219ccdf054c50e8ba218609b581ede2f5176b69a7658537ca3028fd498,PodSandboxId:793cb1b96101c89dc8306ca2677f480c465f83d2707a1049b42e99f314a3e27e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193168998813148,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1285b61656e642fefcf84d28bd25203,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=410709cc-93b0-409d-a9a7-b0bb093d9f2f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4290fa687c68e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   4f033c8242f61       coredns-5dd5756b68-hct77
	7c735d180d5d0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   43387911d61cb       coredns-5dd5756b68-58ct4
	b933a93694d75       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   a0b2d2af8dc36       storage-provisioner
	11079c6b59c67       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   cd4ad099890fe       kube-proxy-7xmlm
	46ba015fd640f       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   44c56a97476a8       kube-scheduler-embed-certs-743937
	0fe64dcf976f8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   050e29796e725       etcd-embed-certs-743937
	4204959d26a52       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   239a8b464db4f       kube-apiserver-embed-certs-743937
	33ce09219ccdf       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   793cb1b96101c       kube-controller-manager-embed-certs-743937
	
	
	==> coredns [4290fa687c68e62428910cf34c67eba8505eebffa114ebfc5fabe86ed057e4a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [7c735d180d5d0680318bcfdd8e1508a82b2181aef6108badc75c9d29b0713af9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               embed-certs-743937
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-743937
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=embed-certs-743937
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T21_39_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 21:39:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-743937
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:48:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 21:45:02 +0000   Mon, 11 Mar 2024 21:39:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 21:45:02 +0000   Mon, 11 Mar 2024 21:39:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 21:45:02 +0000   Mon, 11 Mar 2024 21:39:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 21:45:02 +0000   Mon, 11 Mar 2024 21:39:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.114
	  Hostname:    embed-certs-743937
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7be0937769334b2a86e68256de27730e
	  System UUID:                7be09377-6933-4b2a-86e6-8256de27730e
	  Boot ID:                    c4b5ec1a-ad68-4b58-9017-148856cd6f08
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-58ct4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-5dd5756b68-hct77                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-743937                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-embed-certs-743937             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-embed-certs-743937    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-7xmlm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-743937             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-57f55c9bc5-9z7nz               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m5s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node embed-certs-743937 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node embed-certs-743937 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node embed-certs-743937 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m8s   node-controller  Node embed-certs-743937 event: Registered Node embed-certs-743937 in Controller
	
	
	==> dmesg <==
	[  +0.056310] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047046] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.551317] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.498204] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.708342] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.912857] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.061955] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058452] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.200767] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.152643] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.312406] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +6.201291] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.070436] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.882269] systemd-fstab-generator[894]: Ignoring "noauto" option for root device
	[  +6.626191] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.971657] kauditd_printk_skb: 74 callbacks suppressed
	[Mar11 21:39] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.909170] systemd-fstab-generator[3420]: Ignoring "noauto" option for root device
	[  +7.788719] systemd-fstab-generator[3745]: Ignoring "noauto" option for root device
	[  +0.090633] kauditd_printk_skb: 57 callbacks suppressed
	[ +12.386757] systemd-fstab-generator[3944]: Ignoring "noauto" option for root device
	[  +0.091140] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.150786] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [0fe64dcf976f8a0834063fd35ba390a65c7e0bfe5003a39b02b08afa61573aa2] <==
	{"level":"info","ts":"2024-03-11T21:39:29.660277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 switched to configuration voters=(17357627813233571301)"}
	{"level":"info","ts":"2024-03-11T21:39:29.660518Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"659e1302ad88139d","local-member-id":"f0e2ae880f3a35e5","added-peer-id":"f0e2ae880f3a35e5","added-peer-peer-urls":["https://192.168.50.114:2380"]}
	{"level":"info","ts":"2024-03-11T21:39:29.668718Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-11T21:39:29.668992Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f0e2ae880f3a35e5","initial-advertise-peer-urls":["https://192.168.50.114:2380"],"listen-peer-urls":["https://192.168.50.114:2380"],"advertise-client-urls":["https://192.168.50.114:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.114:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T21:39:29.669048Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T21:39:29.669162Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.114:2380"}
	{"level":"info","ts":"2024-03-11T21:39:29.669192Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.114:2380"}
	{"level":"info","ts":"2024-03-11T21:39:30.502716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-11T21:39:30.50279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-11T21:39:30.502808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 received MsgPreVoteResp from f0e2ae880f3a35e5 at term 1"}
	{"level":"info","ts":"2024-03-11T21:39:30.50282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 became candidate at term 2"}
	{"level":"info","ts":"2024-03-11T21:39:30.502827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 received MsgVoteResp from f0e2ae880f3a35e5 at term 2"}
	{"level":"info","ts":"2024-03-11T21:39:30.502862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 became leader at term 2"}
	{"level":"info","ts":"2024-03-11T21:39:30.502869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f0e2ae880f3a35e5 elected leader f0e2ae880f3a35e5 at term 2"}
	{"level":"info","ts":"2024-03-11T21:39:30.504561Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:39:30.506082Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f0e2ae880f3a35e5","local-member-attributes":"{Name:embed-certs-743937 ClientURLs:[https://192.168.50.114:2379]}","request-path":"/0/members/f0e2ae880f3a35e5/attributes","cluster-id":"659e1302ad88139d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T21:39:30.506704Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:39:30.508434Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T21:39:30.508493Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T21:39:30.509499Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"659e1302ad88139d","local-member-id":"f0e2ae880f3a35e5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:39:30.509714Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:39:30.507047Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:39:30.512632Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T21:39:30.512939Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:39:30.513709Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.114:2379"}
	
	
	==> kernel <==
	 21:48:55 up 14 min,  0 users,  load average: 0.11, 0.12, 0.12
	Linux embed-certs-743937 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4204959d26a528a733e6a7fa26e1713a70b7e38a551fff229e5a4fea09488e0f] <==
	W0311 21:44:33.291691       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:44:33.292003       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:44:33.292071       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:44:33.291822       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:44:33.292242       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:44:33.293485       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 21:45:32.150124       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0311 21:45:33.292965       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:45:33.293031       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:45:33.293039       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:45:33.294357       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:45:33.294480       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:45:33.294492       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 21:46:32.150035       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 21:47:32.150109       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0311 21:47:33.293298       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:47:33.293570       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:47:33.293603       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:47:33.294778       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:47:33.294867       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:47:33.294881       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 21:48:32.150725       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [33ce09219ccdf054c50e8ba218609b581ede2f5176b69a7658537ca3028fd498] <==
	I0311 21:43:18.298737       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:43:47.846832       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:43:48.308284       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:44:17.857892       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:44:18.317328       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:44:47.864465       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:44:48.326487       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:45:17.872180       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:45:18.335020       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:45:47.880547       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:45:48.347985       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0311 21:45:55.947547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="354.843µs"
	I0311 21:46:08.942513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="239.712µs"
	E0311 21:46:17.885734       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:46:18.357823       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:46:47.891686       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:46:48.366893       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:47:17.901738       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:47:18.376801       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:47:47.907650       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:47:48.385782       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:48:17.913261       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:48:18.395799       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:48:47.919600       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:48:48.405829       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [11079c6b59c6771cb52b55b16525d47ef7a0c4a1a3717185d973b0cdb18aadf1] <==
	I0311 21:39:49.257780       1 server_others.go:69] "Using iptables proxy"
	I0311 21:39:49.270995       1 node.go:141] Successfully retrieved node IP: 192.168.50.114
	I0311 21:39:49.328282       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 21:39:49.328348       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 21:39:49.331254       1 server_others.go:152] "Using iptables Proxier"
	I0311 21:39:49.331898       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 21:39:49.332101       1 server.go:846] "Version info" version="v1.28.4"
	I0311 21:39:49.332141       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:39:49.333562       1 config.go:188] "Starting service config controller"
	I0311 21:39:49.337601       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 21:39:49.337676       1 config.go:97] "Starting endpoint slice config controller"
	I0311 21:39:49.337683       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 21:39:49.340436       1 config.go:315] "Starting node config controller"
	I0311 21:39:49.340516       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 21:39:49.445436       1 shared_informer.go:318] Caches are synced for node config
	I0311 21:39:49.445460       1 shared_informer.go:318] Caches are synced for service config
	I0311 21:39:49.445486       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [46ba015fd640fda2171160b84f0a095794044e81a7399129debb70a95b42a575] <==
	W0311 21:39:32.319326       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 21:39:32.320086       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 21:39:32.319359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 21:39:32.319564       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 21:39:32.319615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 21:39:32.320654       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 21:39:32.320606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 21:39:32.320640       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 21:39:33.176303       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 21:39:33.177616       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 21:39:33.202743       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 21:39:33.202847       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0311 21:39:33.224577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 21:39:33.224713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 21:39:33.225339       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 21:39:33.225486       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0311 21:39:33.236944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 21:39:33.237140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 21:39:33.300836       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0311 21:39:33.300889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0311 21:39:33.354617       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 21:39:33.354856       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 21:39:33.562973       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 21:39:33.563109       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0311 21:39:36.113285       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 21:46:35 embed-certs-743937 kubelet[3752]: E0311 21:46:35.951345    3752 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:46:35 embed-certs-743937 kubelet[3752]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:46:35 embed-certs-743937 kubelet[3752]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:46:35 embed-certs-743937 kubelet[3752]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:46:35 embed-certs-743937 kubelet[3752]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:46:50 embed-certs-743937 kubelet[3752]: E0311 21:46:50.924642    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:47:05 embed-certs-743937 kubelet[3752]: E0311 21:47:05.925655    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:47:17 embed-certs-743937 kubelet[3752]: E0311 21:47:17.924808    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:47:28 embed-certs-743937 kubelet[3752]: E0311 21:47:28.925215    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:47:35 embed-certs-743937 kubelet[3752]: E0311 21:47:35.952237    3752 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:47:35 embed-certs-743937 kubelet[3752]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:47:35 embed-certs-743937 kubelet[3752]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:47:35 embed-certs-743937 kubelet[3752]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:47:35 embed-certs-743937 kubelet[3752]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:47:39 embed-certs-743937 kubelet[3752]: E0311 21:47:39.926482    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:47:51 embed-certs-743937 kubelet[3752]: E0311 21:47:51.929078    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:48:05 embed-certs-743937 kubelet[3752]: E0311 21:48:05.925910    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:48:20 embed-certs-743937 kubelet[3752]: E0311 21:48:20.925258    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:48:32 embed-certs-743937 kubelet[3752]: E0311 21:48:32.925728    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:48:35 embed-certs-743937 kubelet[3752]: E0311 21:48:35.950727    3752 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:48:35 embed-certs-743937 kubelet[3752]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:48:35 embed-certs-743937 kubelet[3752]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:48:35 embed-certs-743937 kubelet[3752]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:48:35 embed-certs-743937 kubelet[3752]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:48:44 embed-certs-743937 kubelet[3752]: E0311 21:48:44.925011    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	
	
	==> storage-provisioner [b933a93694d7512040b9cc8038beec371ceaa7ae68f6990c4e899e1732503bd5] <==
	I0311 21:39:50.720018       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 21:39:50.733845       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 21:39:50.733942       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 21:39:50.778995       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 21:39:50.779217       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-743937_5f22cdaf-7bd7-4fd5-aeea-671837d1c42a!
	I0311 21:39:50.779953       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e366ed60-4e73-471c-93f6-807bd709950c", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-743937_5f22cdaf-7bd7-4fd5-aeea-671837d1c42a became leader
	I0311 21:39:50.879491       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-743937_5f22cdaf-7bd7-4fd5-aeea-671837d1c42a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-743937 -n embed-certs-743937
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-743937 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9z7nz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-743937 describe pod metrics-server-57f55c9bc5-9z7nz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-743937 describe pod metrics-server-57f55c9bc5-9z7nz: exit status 1 (64.417481ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9z7nz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-743937 describe pod metrics-server-57f55c9bc5-9z7nz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0311 21:41:23.916018   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:41:28.681625   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:41:58.807871   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 21:42:37.144860   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
E0311 21:42:38.935468   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 21:42:46.963046   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:42:51.726786   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-766430 -n default-k8s-diff-port-766430
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-11 21:49:35.354692727 +0000 UTC m=+5983.526367030
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766430 -n default-k8s-diff-port-766430
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-766430 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-766430 logs -n 25: (2.12986717s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-427678 sudo cat                              | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo find                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo crio                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-427678                                       | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-124446 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | disable-driver-mounts-124446                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:26 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-766430  | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-324578             | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-743937            | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239315        | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-766430       | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-324578                  | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:40 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-743937                 | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239315             | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC | 11 Mar 24 21:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 21:30:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 21:30:01.044166   70908 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:30:01.044254   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044259   70908 out.go:304] Setting ErrFile to fd 2...
	I0311 21:30:01.044263   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044451   70908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:30:01.044970   70908 out.go:298] Setting JSON to false
	I0311 21:30:01.045838   70908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7950,"bootTime":1710184651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:30:01.045894   70908 start.go:139] virtualization: kvm guest
	I0311 21:30:01.048311   70908 out.go:177] * [old-k8s-version-239315] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:30:01.050003   70908 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:30:01.050011   70908 notify.go:220] Checking for updates...
	I0311 21:30:01.051498   70908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:30:01.052999   70908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:30:01.054439   70908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:30:01.055768   70908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:30:01.057137   70908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:30:01.058760   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:30:01.059167   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.059205   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.073734   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0311 21:30:01.074087   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.074586   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.074618   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.074966   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.075173   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.077005   70908 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 21:30:01.078583   70908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:30:01.078879   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.078914   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.093226   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0311 21:30:01.093614   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.094174   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.094243   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.094616   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.094805   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.128302   70908 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 21:30:01.129965   70908 start.go:297] selected driver: kvm2
	I0311 21:30:01.129991   70908 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.130113   70908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:30:01.131050   70908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.131115   70908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:30:01.145452   70908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:30:01.145782   70908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:30:01.145811   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:30:01.145819   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:30:01.145863   70908 start.go:340] cluster config:
	{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.145954   70908 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.147725   70908 out.go:177] * Starting "old-k8s-version-239315" primary control-plane node in "old-k8s-version-239315" cluster
	I0311 21:30:01.148916   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:30:01.148943   70908 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0311 21:30:01.148955   70908 cache.go:56] Caching tarball of preloaded images
	I0311 21:30:01.149022   70908 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:30:01.149032   70908 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0311 21:30:01.149114   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:30:01.149263   70908 start.go:360] acquireMachinesLock for old-k8s-version-239315: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:30:05.352968   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:08.425086   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:14.504922   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:17.577080   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:23.656996   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:26.729009   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:32.809042   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:35.881008   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:41.960992   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:45.033096   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:51.112925   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:54.184989   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:00.265058   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:03.337012   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:09.416960   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:12.489005   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:18.569021   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:21.640990   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:27.721019   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:30.793040   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:36.872985   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:39.945005   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:46.025035   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:49.096988   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:55.176985   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:58.249009   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:04.328981   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:07.401006   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:13.480986   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:16.552965   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:22.632997   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:25.705064   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:31.784993   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:34.857027   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:40.937002   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:44.008989   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:50.088959   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:53.161092   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:59.241045   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:02.313084   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:08.393056   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:11.465079   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:17.545057   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:20.617082   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:26.697000   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:29.768926   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:35.849024   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:38.921096   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:41.925305   70458 start.go:364] duration metric: took 4m36.419231792s to acquireMachinesLock for "no-preload-324578"
	I0311 21:33:41.925360   70458 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:33:41.925368   70458 fix.go:54] fixHost starting: 
	I0311 21:33:41.925768   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:33:41.925798   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:33:41.940654   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0311 21:33:41.941130   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:33:41.941619   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:33:41.941646   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:33:41.942045   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:33:41.942209   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:33:41.942370   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:33:41.944009   70458 fix.go:112] recreateIfNeeded on no-preload-324578: state=Stopped err=<nil>
	I0311 21:33:41.944030   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	W0311 21:33:41.944231   70458 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:33:41.946020   70458 out.go:177] * Restarting existing kvm2 VM for "no-preload-324578" ...
	I0311 21:33:41.922711   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:33:41.922754   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:33:41.923131   70417 buildroot.go:166] provisioning hostname "default-k8s-diff-port-766430"
	I0311 21:33:41.923158   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:33:41.923430   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:33:41.925178   70417 machine.go:97] duration metric: took 4m37.414792129s to provisionDockerMachine
	I0311 21:33:41.925213   70417 fix.go:56] duration metric: took 4m37.435982654s for fixHost
	I0311 21:33:41.925219   70417 start.go:83] releasing machines lock for "default-k8s-diff-port-766430", held for 4m37.436000925s
	W0311 21:33:41.925242   70417 start.go:713] error starting host: provision: host is not running
	W0311 21:33:41.925330   70417 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0311 21:33:41.925343   70417 start.go:728] Will try again in 5 seconds ...
	I0311 21:33:41.947495   70458 main.go:141] libmachine: (no-preload-324578) Calling .Start
	I0311 21:33:41.947676   70458 main.go:141] libmachine: (no-preload-324578) Ensuring networks are active...
	I0311 21:33:41.948386   70458 main.go:141] libmachine: (no-preload-324578) Ensuring network default is active
	I0311 21:33:41.948724   70458 main.go:141] libmachine: (no-preload-324578) Ensuring network mk-no-preload-324578 is active
	I0311 21:33:41.949117   70458 main.go:141] libmachine: (no-preload-324578) Getting domain xml...
	I0311 21:33:41.949876   70458 main.go:141] libmachine: (no-preload-324578) Creating domain...
	I0311 21:33:43.129733   70458 main.go:141] libmachine: (no-preload-324578) Waiting to get IP...
	I0311 21:33:43.130601   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.131006   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.131053   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.130975   71444 retry.go:31] will retry after 209.203314ms: waiting for machine to come up
	I0311 21:33:43.341724   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.342324   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.342361   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.342279   71444 retry.go:31] will retry after 375.396917ms: waiting for machine to come up
	I0311 21:33:43.718906   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.719329   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.719351   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.719288   71444 retry.go:31] will retry after 428.365393ms: waiting for machine to come up
	I0311 21:33:44.148895   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:44.149334   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:44.149358   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:44.149284   71444 retry.go:31] will retry after 561.478535ms: waiting for machine to come up
	I0311 21:33:44.712065   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:44.712548   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:44.712576   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:44.712465   71444 retry.go:31] will retry after 700.993236ms: waiting for machine to come up
	I0311 21:33:46.926379   70417 start.go:360] acquireMachinesLock for default-k8s-diff-port-766430: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:33:45.415695   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:45.416242   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:45.416276   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:45.416215   71444 retry.go:31] will retry after 809.474202ms: waiting for machine to come up
	I0311 21:33:46.227098   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:46.227573   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:46.227608   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:46.227520   71444 retry.go:31] will retry after 1.075187328s: waiting for machine to come up
	I0311 21:33:47.303981   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:47.304454   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:47.304483   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:47.304397   71444 retry.go:31] will retry after 1.145290319s: waiting for machine to come up
	I0311 21:33:48.451871   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:48.452316   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:48.452350   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:48.452267   71444 retry.go:31] will retry after 1.172261063s: waiting for machine to come up
	I0311 21:33:49.626502   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:49.627067   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:49.627089   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:49.627023   71444 retry.go:31] will retry after 2.201479026s: waiting for machine to come up
	I0311 21:33:51.831519   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:51.831972   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:51.832008   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:51.831905   71444 retry.go:31] will retry after 2.888101699s: waiting for machine to come up
	I0311 21:33:54.721322   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:54.721753   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:54.721773   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:54.721722   71444 retry.go:31] will retry after 3.512655296s: waiting for machine to come up
	I0311 21:33:58.235767   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:58.236180   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:58.236219   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:58.236141   71444 retry.go:31] will retry after 3.975760652s: waiting for machine to come up
	I0311 21:34:03.525918   70604 start.go:364] duration metric: took 4m44.449252209s to acquireMachinesLock for "embed-certs-743937"
	I0311 21:34:03.525995   70604 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:03.526008   70604 fix.go:54] fixHost starting: 
	I0311 21:34:03.526428   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:03.526470   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:03.542427   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I0311 21:34:03.542857   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:03.543292   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:34:03.543317   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:03.543616   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:03.543806   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:03.543991   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:34:03.545366   70604 fix.go:112] recreateIfNeeded on embed-certs-743937: state=Stopped err=<nil>
	I0311 21:34:03.545391   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	W0311 21:34:03.545540   70604 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:03.548158   70604 out.go:177] * Restarting existing kvm2 VM for "embed-certs-743937" ...
	I0311 21:34:03.549803   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Start
	I0311 21:34:03.549966   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring networks are active...
	I0311 21:34:03.550712   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring network default is active
	I0311 21:34:03.551124   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring network mk-embed-certs-743937 is active
	I0311 21:34:03.551528   70604 main.go:141] libmachine: (embed-certs-743937) Getting domain xml...
	I0311 21:34:03.552226   70604 main.go:141] libmachine: (embed-certs-743937) Creating domain...
	I0311 21:34:02.213709   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.214152   70458 main.go:141] libmachine: (no-preload-324578) Found IP for machine: 192.168.39.36
	I0311 21:34:02.214181   70458 main.go:141] libmachine: (no-preload-324578) Reserving static IP address...
	I0311 21:34:02.214196   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has current primary IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.214631   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "no-preload-324578", mac: "52:54:00:00:fc:98", ip: "192.168.39.36"} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.214655   70458 main.go:141] libmachine: (no-preload-324578) DBG | skip adding static IP to network mk-no-preload-324578 - found existing host DHCP lease matching {name: "no-preload-324578", mac: "52:54:00:00:fc:98", ip: "192.168.39.36"}
	I0311 21:34:02.214666   70458 main.go:141] libmachine: (no-preload-324578) Reserved static IP address: 192.168.39.36
	I0311 21:34:02.214680   70458 main.go:141] libmachine: (no-preload-324578) Waiting for SSH to be available...
	I0311 21:34:02.214704   70458 main.go:141] libmachine: (no-preload-324578) DBG | Getting to WaitForSSH function...
	I0311 21:34:02.216798   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.217068   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.217111   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.217285   70458 main.go:141] libmachine: (no-preload-324578) DBG | Using SSH client type: external
	I0311 21:34:02.217316   70458 main.go:141] libmachine: (no-preload-324578) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa (-rw-------)
	I0311 21:34:02.217356   70458 main.go:141] libmachine: (no-preload-324578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:02.217374   70458 main.go:141] libmachine: (no-preload-324578) DBG | About to run SSH command:
	I0311 21:34:02.217389   70458 main.go:141] libmachine: (no-preload-324578) DBG | exit 0
	I0311 21:34:02.340837   70458 main.go:141] libmachine: (no-preload-324578) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:02.341154   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetConfigRaw
	I0311 21:34:02.341752   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:02.344368   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.344756   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.344791   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.344942   70458 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/config.json ...
	I0311 21:34:02.345142   70458 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:02.345159   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:02.345353   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.347647   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.348001   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.348029   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.348118   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.348284   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.348432   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.348548   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.348704   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.348913   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.348925   70458 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:02.457273   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:02.457298   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.457523   70458 buildroot.go:166] provisioning hostname "no-preload-324578"
	I0311 21:34:02.457554   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.457757   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.460347   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.460658   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.460688   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.460913   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.461126   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.461286   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.461415   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.461574   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.461758   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.461775   70458 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-324578 && echo "no-preload-324578" | sudo tee /etc/hostname
	I0311 21:34:02.583388   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-324578
	
	I0311 21:34:02.583414   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.586043   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.586399   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.586431   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.586592   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.586799   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.586957   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.587084   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.587271   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.587433   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.587449   70458 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-324578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-324578/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-324578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:02.702365   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:02.702399   70458 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:02.702420   70458 buildroot.go:174] setting up certificates
	I0311 21:34:02.702431   70458 provision.go:84] configureAuth start
	I0311 21:34:02.702439   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.702725   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:02.705459   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.705882   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.705902   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.706048   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.708166   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.708476   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.708502   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.708618   70458 provision.go:143] copyHostCerts
	I0311 21:34:02.708675   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:02.708684   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:02.708764   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:02.708875   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:02.708885   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:02.708911   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:02.708977   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:02.708984   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:02.709005   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:02.709063   70458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.no-preload-324578 san=[127.0.0.1 192.168.39.36 localhost minikube no-preload-324578]
	I0311 21:34:02.823423   70458 provision.go:177] copyRemoteCerts
	I0311 21:34:02.823484   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:02.823508   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.826221   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.826538   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.826584   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.826743   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.826974   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.827158   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.827344   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:02.912138   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:02.938138   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0311 21:34:02.967391   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:34:02.992208   70458 provision.go:87] duration metric: took 289.765831ms to configureAuth
	I0311 21:34:02.992232   70458 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:02.992376   70458 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:34:02.992440   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.994808   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.995124   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.995154   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.995315   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.995490   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.995640   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.995818   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.995997   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.996187   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.996202   70458 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:03.283611   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:03.283643   70458 machine.go:97] duration metric: took 938.487892ms to provisionDockerMachine
	I0311 21:34:03.283655   70458 start.go:293] postStartSetup for "no-preload-324578" (driver="kvm2")
	I0311 21:34:03.283667   70458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:03.283681   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.284008   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:03.284043   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.286802   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.287220   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.287262   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.287379   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.287546   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.287731   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.287930   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.372563   70458 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:03.377151   70458 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:03.377171   70458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:03.377225   70458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:03.377291   70458 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:03.377377   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:03.387792   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:03.412721   70458 start.go:296] duration metric: took 129.055927ms for postStartSetup
	I0311 21:34:03.412770   70458 fix.go:56] duration metric: took 21.487401487s for fixHost
	I0311 21:34:03.412790   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.415209   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.415507   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.415533   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.415668   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.415866   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.416035   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.416179   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.416353   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:03.416502   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:03.416513   70458 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:03.525759   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192843.475283818
	
	I0311 21:34:03.525781   70458 fix.go:216] guest clock: 1710192843.475283818
	I0311 21:34:03.525790   70458 fix.go:229] Guest: 2024-03-11 21:34:03.475283818 +0000 UTC Remote: 2024-03-11 21:34:03.412775346 +0000 UTC m=+298.052241307 (delta=62.508472ms)
	I0311 21:34:03.525815   70458 fix.go:200] guest clock delta is within tolerance: 62.508472ms
	I0311 21:34:03.525833   70458 start.go:83] releasing machines lock for "no-preload-324578", held for 21.600490138s
	I0311 21:34:03.525866   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.526157   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:03.528771   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.529117   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.529143   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.529272   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529721   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529897   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529978   70458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:03.530022   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.530124   70458 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:03.530151   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.532450   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.532624   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.532813   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.532843   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.533001   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.533010   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.533034   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.533171   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.533197   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.533350   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.533353   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.533504   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.533506   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.533639   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.614855   70458 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:03.638835   70458 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:03.787832   70458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:03.794627   70458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:03.794677   70458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:03.811771   70458 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:03.811790   70458 start.go:494] detecting cgroup driver to use...
	I0311 21:34:03.811845   70458 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:03.829561   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:03.844536   70458 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:03.844582   70458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:03.859811   70458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:03.875041   70458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:03.991456   70458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:04.174783   70458 docker.go:233] disabling docker service ...
	I0311 21:34:04.174848   70458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:04.192524   70458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:04.206906   70458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:04.340047   70458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:04.455686   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:04.472512   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:04.495487   70458 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:34:04.495550   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.506921   70458 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:04.506997   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.519408   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.531418   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.543684   70458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:04.555846   70458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:04.567610   70458 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:04.567658   70458 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:04.583015   70458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:04.594515   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:04.715185   70458 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:04.872750   70458 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:04.872848   70458 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:04.878207   70458 start.go:562] Will wait 60s for crictl version
	I0311 21:34:04.878250   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:04.882436   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:04.921007   70458 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:04.921079   70458 ssh_runner.go:195] Run: crio --version
	I0311 21:34:04.959326   70458 ssh_runner.go:195] Run: crio --version
	I0311 21:34:04.997595   70458 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0311 21:34:04.999092   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:05.002092   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:05.002526   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:05.002566   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:05.002790   70458 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:05.007758   70458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:05.023330   70458 kubeadm.go:877] updating cluster {Name:no-preload-324578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:05.023430   70458 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 21:34:05.023461   70458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:05.063043   70458 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0311 21:34:05.063071   70458 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:34:05.063161   70458 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.063170   70458 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.063183   70458 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.063190   70458 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.063233   70458 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0311 21:34:05.063171   70458 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.063272   70458 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.063307   70458 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.065013   70458 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.065019   70458 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.065020   70458 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.065045   70458 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0311 21:34:05.065017   70458 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.065018   70458 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.065064   70458 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.065365   70458 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.209182   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.211431   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.220663   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.230965   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.237859   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0311 21:34:05.260820   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.288596   70458 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0311 21:34:05.288651   70458 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.288697   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.324896   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.342987   70458 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0311 21:34:05.343030   70458 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.343080   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.371663   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.377262   70458 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0311 21:34:05.377306   70458 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.377349   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:04.792889   70604 main.go:141] libmachine: (embed-certs-743937) Waiting to get IP...
	I0311 21:34:04.793678   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:04.794097   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:04.794152   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:04.794064   71579 retry.go:31] will retry after 281.522937ms: waiting for machine to come up
	I0311 21:34:05.077518   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.077856   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.077889   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.077814   71579 retry.go:31] will retry after 303.836522ms: waiting for machine to come up
	I0311 21:34:05.383244   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.383796   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.383839   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.383758   71579 retry.go:31] will retry after 333.172379ms: waiting for machine to come up
	I0311 21:34:05.718117   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.718603   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.718630   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.718562   71579 retry.go:31] will retry after 469.046827ms: waiting for machine to come up
	I0311 21:34:06.189304   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:06.189748   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:06.189777   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:06.189705   71579 retry.go:31] will retry after 636.781259ms: waiting for machine to come up
	I0311 21:34:06.828672   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:06.829136   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:06.829174   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:06.829078   71579 retry.go:31] will retry after 758.609427ms: waiting for machine to come up
	I0311 21:34:07.589134   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:07.589490   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:07.589513   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:07.589466   71579 retry.go:31] will retry after 990.575872ms: waiting for machine to come up
	I0311 21:34:08.581971   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:08.582312   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:08.582344   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:08.582290   71579 retry.go:31] will retry after 1.142377902s: waiting for machine to come up
	I0311 21:34:05.421288   70458 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0311 21:34:05.421340   70458 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.421390   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473450   70458 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0311 21:34:05.473497   70458 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.473527   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.473545   70458 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0311 21:34:05.473584   70458 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.473603   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.473639   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473663   70458 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0311 21:34:05.473701   70458 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.473707   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.473730   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473548   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473766   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.569510   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.569615   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.578915   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0311 21:34:05.578979   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:05.579007   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:05.579029   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.579077   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:05.579117   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.579158   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.579209   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0311 21:34:05.579272   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:05.584413   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0311 21:34:05.584425   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.584458   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.679191   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0311 21:34:05.679259   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0311 21:34:05.679288   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:05.679337   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:05.679368   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0311 21:34:05.679369   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0311 21:34:05.679414   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:05.679428   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0311 21:34:05.679485   70458 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:07.621341   70458 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.942028932s)
	I0311 21:34:07.621382   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0311 21:34:07.621385   70458 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.941873405s)
	I0311 21:34:07.621413   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0311 21:34:07.621424   70458 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (1.941989707s)
	I0311 21:34:07.621452   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0311 21:34:07.621544   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.037072472s)
	I0311 21:34:07.621558   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0311 21:34:07.621580   70458 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:07.621627   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:09.726761   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:09.727207   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:09.727241   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:09.727153   71579 retry.go:31] will retry after 1.17092616s: waiting for machine to come up
	I0311 21:34:10.899311   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:10.899656   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:10.899675   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:10.899631   71579 retry.go:31] will retry after 1.870900402s: waiting for machine to come up
	I0311 21:34:12.771931   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:12.772421   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:12.772457   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:12.772375   71579 retry.go:31] will retry after 2.721804623s: waiting for machine to come up
	I0311 21:34:11.524646   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.902991705s)
	I0311 21:34:11.524683   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0311 21:34:11.524711   70458 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:11.524787   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:13.704750   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.179921724s)
	I0311 21:34:13.704786   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0311 21:34:13.704817   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:13.704868   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:15.496186   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:15.496686   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:15.496722   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:15.496627   71579 retry.go:31] will retry after 2.568850361s: waiting for machine to come up
	I0311 21:34:18.068470   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:18.068926   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:18.068959   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:18.068872   71579 retry.go:31] will retry after 4.111366971s: waiting for machine to come up
	I0311 21:34:16.267427   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.562528088s)
	I0311 21:34:16.267458   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0311 21:34:16.267486   70458 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:16.267535   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:17.218029   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0311 21:34:17.218065   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:17.218104   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:18.987120   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.768996335s)
	I0311 21:34:18.987149   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0311 21:34:18.987167   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:18.987219   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:23.543571   70908 start.go:364] duration metric: took 4m22.394278247s to acquireMachinesLock for "old-k8s-version-239315"
	I0311 21:34:23.543649   70908 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:23.543661   70908 fix.go:54] fixHost starting: 
	I0311 21:34:23.544084   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:23.544139   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:23.561669   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0311 21:34:23.562158   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:23.562618   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:34:23.562645   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:23.562949   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:23.563114   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:23.563306   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetState
	I0311 21:34:23.565152   70908 fix.go:112] recreateIfNeeded on old-k8s-version-239315: state=Stopped err=<nil>
	I0311 21:34:23.565178   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	W0311 21:34:23.565351   70908 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:23.567943   70908 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239315" ...
	I0311 21:34:22.182707   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.183200   70604 main.go:141] libmachine: (embed-certs-743937) Found IP for machine: 192.168.50.114
	I0311 21:34:22.183228   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has current primary IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.183238   70604 main.go:141] libmachine: (embed-certs-743937) Reserving static IP address...
	I0311 21:34:22.183694   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "embed-certs-743937", mac: "52:54:00:84:b4:7a", ip: "192.168.50.114"} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.183716   70604 main.go:141] libmachine: (embed-certs-743937) DBG | skip adding static IP to network mk-embed-certs-743937 - found existing host DHCP lease matching {name: "embed-certs-743937", mac: "52:54:00:84:b4:7a", ip: "192.168.50.114"}
	I0311 21:34:22.183728   70604 main.go:141] libmachine: (embed-certs-743937) Reserved static IP address: 192.168.50.114
	I0311 21:34:22.183746   70604 main.go:141] libmachine: (embed-certs-743937) Waiting for SSH to be available...
	I0311 21:34:22.183760   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Getting to WaitForSSH function...
	I0311 21:34:22.185820   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.186157   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.186193   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.186285   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Using SSH client type: external
	I0311 21:34:22.186317   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa (-rw-------)
	I0311 21:34:22.186349   70604 main.go:141] libmachine: (embed-certs-743937) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:22.186368   70604 main.go:141] libmachine: (embed-certs-743937) DBG | About to run SSH command:
	I0311 21:34:22.186384   70604 main.go:141] libmachine: (embed-certs-743937) DBG | exit 0
	I0311 21:34:22.313253   70604 main.go:141] libmachine: (embed-certs-743937) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:22.313570   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetConfigRaw
	I0311 21:34:22.314271   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:22.317040   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.317404   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.317509   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.317641   70604 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/config.json ...
	I0311 21:34:22.317814   70604 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:22.317830   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:22.318049   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.320550   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.320833   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.320859   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.320992   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.321223   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.321405   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.321547   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.321708   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.321930   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.321944   70604 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:22.430028   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:22.430055   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.430345   70604 buildroot.go:166] provisioning hostname "embed-certs-743937"
	I0311 21:34:22.430374   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.430568   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.433555   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.433884   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.433907   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.434102   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.434311   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.434474   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.434611   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.434762   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.434936   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.434954   70604 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-743937 && echo "embed-certs-743937" | sudo tee /etc/hostname
	I0311 21:34:22.564819   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-743937
	
	I0311 21:34:22.564848   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.567667   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.568075   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.568122   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.568325   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.568519   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.568719   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.568913   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.569094   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.569335   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.569361   70604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-743937' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-743937/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-743937' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:22.684397   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:22.684425   70604 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:22.684473   70604 buildroot.go:174] setting up certificates
	I0311 21:34:22.684490   70604 provision.go:84] configureAuth start
	I0311 21:34:22.684507   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.684840   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:22.687805   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.688156   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.688178   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.688401   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.690975   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.691302   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.691321   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.691469   70604 provision.go:143] copyHostCerts
	I0311 21:34:22.691528   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:22.691540   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:22.691598   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:22.691690   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:22.691706   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:22.691729   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:22.691834   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:22.691850   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:22.691878   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:22.691946   70604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.embed-certs-743937 san=[127.0.0.1 192.168.50.114 embed-certs-743937 localhost minikube]
	I0311 21:34:22.838395   70604 provision.go:177] copyRemoteCerts
	I0311 21:34:22.838452   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:22.838478   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.840975   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.841308   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.841342   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.841487   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.841684   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.841834   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.841968   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:22.924202   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:22.956079   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0311 21:34:22.982352   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 21:34:23.008286   70604 provision.go:87] duration metric: took 323.780619ms to configureAuth
	I0311 21:34:23.008311   70604 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:23.008481   70604 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:34:23.008553   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.011128   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.011439   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.011461   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.011632   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.011780   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.011919   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.012094   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.012278   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:23.012436   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:23.012452   70604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:23.288122   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:23.288146   70604 machine.go:97] duration metric: took 970.321311ms to provisionDockerMachine
	I0311 21:34:23.288157   70604 start.go:293] postStartSetup for "embed-certs-743937" (driver="kvm2")
	I0311 21:34:23.288167   70604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:23.288180   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.288496   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:23.288532   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.291434   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.291823   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.291856   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.292079   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.292297   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.292468   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.292629   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.376367   70604 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:23.381629   70604 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:23.381660   70604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:23.381754   70604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:23.381855   70604 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:23.381967   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:23.392280   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:23.423241   70604 start.go:296] duration metric: took 135.071082ms for postStartSetup
	I0311 21:34:23.423283   70604 fix.go:56] duration metric: took 19.897275281s for fixHost
	I0311 21:34:23.423310   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.426264   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.426623   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.426652   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.426862   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.427052   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.427256   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.427419   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.427575   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:23.427809   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:23.427822   70604 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 21:34:23.543425   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192863.499269756
	
	I0311 21:34:23.543447   70604 fix.go:216] guest clock: 1710192863.499269756
	I0311 21:34:23.543454   70604 fix.go:229] Guest: 2024-03-11 21:34:23.499269756 +0000 UTC Remote: 2024-03-11 21:34:23.423289031 +0000 UTC m=+304.494814333 (delta=75.980725ms)
	I0311 21:34:23.543472   70604 fix.go:200] guest clock delta is within tolerance: 75.980725ms
	I0311 21:34:23.543478   70604 start.go:83] releasing machines lock for "embed-certs-743937", held for 20.0175167s
	I0311 21:34:23.543504   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.543746   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:23.546763   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.547188   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.547223   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.547396   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.547882   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.548077   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.548163   70604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:23.548226   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.548282   70604 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:23.548309   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.551186   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551485   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551609   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.551642   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551795   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.551979   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.552001   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.552035   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.552146   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.552211   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.552277   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.552368   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.552501   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.552666   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.660064   70604 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:23.668731   70604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:23.831784   70604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:23.840331   70604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:23.840396   70604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:23.864730   70604 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:23.864766   70604 start.go:494] detecting cgroup driver to use...
	I0311 21:34:23.864831   70604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:23.886072   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:23.901660   70604 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:23.901727   70604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:23.917374   70604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:23.932525   70604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:24.066368   70604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:24.222425   70604 docker.go:233] disabling docker service ...
	I0311 21:34:24.222487   70604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:24.240937   70604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:24.257050   70604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:24.395003   70604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:24.550709   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:24.572524   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:24.599710   70604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:34:24.599776   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.612426   70604 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:24.612514   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.626989   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.639576   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.653711   70604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:24.673581   70604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:24.684772   70604 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:24.684841   70604 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:24.707855   70604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:24.719801   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:24.904788   70604 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:25.063437   70604 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:25.063511   70604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:25.070294   70604 start.go:562] Will wait 60s for crictl version
	I0311 21:34:25.070352   70604 ssh_runner.go:195] Run: which crictl
	I0311 21:34:25.074945   70604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:25.121979   70604 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:25.122070   70604 ssh_runner.go:195] Run: crio --version
	I0311 21:34:25.159092   70604 ssh_runner.go:195] Run: crio --version
	I0311 21:34:25.207391   70604 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 21:34:21.469205   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.481954559s)
	I0311 21:34:21.469242   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0311 21:34:21.469285   70458 cache_images.go:123] Successfully loaded all cached images
	I0311 21:34:21.469295   70458 cache_images.go:92] duration metric: took 16.40620232s to LoadCachedImages
	I0311 21:34:21.469306   70458 kubeadm.go:928] updating node { 192.168.39.36 8443 v1.29.0-rc.2 crio true true} ...
	I0311 21:34:21.469436   70458 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-324578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:21.469513   70458 ssh_runner.go:195] Run: crio config
	I0311 21:34:21.531635   70458 cni.go:84] Creating CNI manager for ""
	I0311 21:34:21.531659   70458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:21.531671   70458 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:21.531690   70458 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-324578 NodeName:no-preload-324578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:34:21.531820   70458 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-324578"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:21.531876   70458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0311 21:34:21.546000   70458 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:21.546060   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:21.558818   70458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0311 21:34:21.577685   70458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0311 21:34:21.595960   70458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
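
The InitConfiguration / ClusterConfiguration / KubeletConfiguration dump above is rendered from Go templates and then copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A much-reduced sketch of that rendering step, assuming text/template and only the fields visible in this log (this is not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// Trimmed-down InitConfiguration template; real minikube templates cover
// the full ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values mirror the no-preload-324578 run logged above.
	_ = t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.39.36",
		"BindPort":         8443,
		"CRISocket":        "/var/run/crio/crio.sock",
		"NodeName":         "no-preload-324578",
	})
}
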
	I0311 21:34:21.615003   70458 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:21.619290   70458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:21.633307   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:21.751586   70458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:21.771672   70458 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578 for IP: 192.168.39.36
	I0311 21:34:21.771698   70458 certs.go:194] generating shared ca certs ...
	I0311 21:34:21.771717   70458 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:21.771907   70458 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:21.771975   70458 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:21.771987   70458 certs.go:256] generating profile certs ...
	I0311 21:34:21.772093   70458 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/client.key
	I0311 21:34:21.772190   70458 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.key.681a9200
	I0311 21:34:21.772244   70458 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.key
	I0311 21:34:21.772371   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:21.772421   70458 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:21.772435   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:21.772475   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:21.772509   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:21.772542   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:21.772606   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:21.773241   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:21.833566   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:21.868156   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:21.910118   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:21.952222   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0311 21:34:21.988148   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 21:34:22.018493   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:22.045225   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:22.071481   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:22.097525   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:22.123425   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:22.156613   70458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:22.174679   70458 ssh_runner.go:195] Run: openssl version
	I0311 21:34:22.181137   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:22.197490   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.203508   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.203556   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.210822   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:22.224269   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:22.237282   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.242762   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.242816   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.249334   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:22.261866   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:22.273674   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.279004   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.279055   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.285394   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:22.299493   70458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:22.304827   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:22.311349   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:22.318377   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:22.325621   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:22.332316   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:22.338893   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
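
Each of the openssl x509 -checkend 86400 calls above asks whether a certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509, reusing the apiserver-kubelet-client.crt path from this run:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: fail if the cert expires within 86400 seconds.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400 seconds")
	} else {
		fmt.Println("certificate is valid for at least another day")
	}
}
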
	I0311 21:34:22.345167   70458 kubeadm.go:391] StartCluster: {Name:no-preload-324578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:22.345246   70458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:22.345286   70458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:22.386703   70458 cri.go:89] found id: ""
	I0311 21:34:22.386785   70458 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:22.398475   70458 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:22.398494   70458 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:22.398500   70458 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:22.398558   70458 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:22.409434   70458 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:22.410675   70458 kubeconfig.go:125] found "no-preload-324578" server: "https://192.168.39.36:8443"
	I0311 21:34:22.412906   70458 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:22.423677   70458 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.36
	I0311 21:34:22.423708   70458 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:22.423719   70458 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:22.423762   70458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:22.472548   70458 cri.go:89] found id: ""
	I0311 21:34:22.472615   70458 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:22.494701   70458 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:22.506944   70458 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:22.506964   70458 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:22.507015   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:22.517468   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:22.517521   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:22.528281   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:22.538496   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:22.538533   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:22.553009   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:22.566120   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:22.566189   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:22.579239   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:22.590180   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:22.590227   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:22.602988   70458 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:22.615631   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:22.730568   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.355205   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.588923   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.694870   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
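
The restart path above replays five kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same kubeadm.yaml. A sketch of that loop, run locally instead of over ssh and without minikube's PATH override:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm %v:\n%s\n", args, out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}
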
	I0311 21:34:23.796820   70458 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:23.796918   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.297341   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.797197   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.840030   70458 api_server.go:72] duration metric: took 1.043209284s to wait for apiserver process to appear ...
	I0311 21:34:24.840062   70458 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:34:24.840101   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:24.840560   70458 api_server.go:269] stopped: https://192.168.39.36:8443/healthz: Get "https://192.168.39.36:8443/healthz": dial tcp 192.168.39.36:8443: connect: connection refused
	I0311 21:34:25.341161   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:23.569356   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .Start
	I0311 21:34:23.569527   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring networks are active...
	I0311 21:34:23.570188   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network default is active
	I0311 21:34:23.570613   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network mk-old-k8s-version-239315 is active
	I0311 21:34:23.571070   70908 main.go:141] libmachine: (old-k8s-version-239315) Getting domain xml...
	I0311 21:34:23.571836   70908 main.go:141] libmachine: (old-k8s-version-239315) Creating domain...
	I0311 21:34:24.895619   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting to get IP...
	I0311 21:34:24.896680   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:24.897160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:24.897218   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:24.897131   71714 retry.go:31] will retry after 268.563191ms: waiting for machine to come up
	I0311 21:34:25.167783   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.168312   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.168343   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.168268   71714 retry.go:31] will retry after 245.059124ms: waiting for machine to come up
	I0311 21:34:25.414644   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.415139   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.415168   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.415100   71714 retry.go:31] will retry after 407.807793ms: waiting for machine to come up
	I0311 21:34:25.824887   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.825351   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.825379   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.825274   71714 retry.go:31] will retry after 503.187834ms: waiting for machine to come up
	I0311 21:34:25.208819   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:25.211726   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:25.212203   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:25.212244   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:25.212486   70604 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:25.217365   70604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:25.233670   70604 kubeadm.go:877] updating cluster {Name:embed-certs-743937 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:25.233825   70604 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:34:25.233886   70604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:25.282028   70604 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 21:34:25.282108   70604 ssh_runner.go:195] Run: which lz4
	I0311 21:34:25.287047   70604 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:34:25.291721   70604 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:34:25.291751   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 21:34:27.414481   70604 crio.go:444] duration metric: took 2.127464595s to copy over tarball
	I0311 21:34:27.414554   70604 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:34:28.225996   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:28.226031   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:28.226048   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.285274   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:28.285307   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:28.340493   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.512353   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:28.512409   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:28.840800   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.852523   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:28.852560   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:29.341135   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:29.354997   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:29.355028   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:29.840769   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:29.848023   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0311 21:34:29.856262   70458 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:34:29.856290   70458 api_server.go:131] duration metric: took 5.016219789s to wait for apiserver health ...
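
The healthz wait above polls https://192.168.39.36:8443/healthz every 500ms, tolerating 403 and 500 responses until a 200 arrives. A standalone sketch of that loop; TLS verification is skipped here only for brevity (minikube trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.36:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 before RBAC bootstraps, 500 while poststarthooks run.
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy in time")
}
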
	I0311 21:34:29.856300   70458 cni.go:84] Creating CNI manager for ""
	I0311 21:34:29.856308   70458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:29.858297   70458 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:34:29.859734   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:34:29.891375   70458 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:34:29.932393   70458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:34:29.959208   70458 system_pods.go:59] 8 kube-system pods found
	I0311 21:34:29.959257   70458 system_pods.go:61] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:34:29.959268   70458 system_pods.go:61] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:34:29.959278   70458 system_pods.go:61] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:34:29.959290   70458 system_pods.go:61] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:34:29.959296   70458 system_pods.go:61] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:34:29.959303   70458 system_pods.go:61] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:34:29.959319   70458 system_pods.go:61] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:34:29.959335   70458 system_pods.go:61] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:34:29.959343   70458 system_pods.go:74] duration metric: took 26.926978ms to wait for pod list to return data ...
	I0311 21:34:29.959355   70458 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:34:29.963151   70458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:34:29.963179   70458 node_conditions.go:123] node cpu capacity is 2
	I0311 21:34:29.963193   70458 node_conditions.go:105] duration metric: took 3.825246ms to run NodePressure ...
	I0311 21:34:29.963209   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:26.330005   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:26.330547   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:26.330569   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:26.330464   71714 retry.go:31] will retry after 723.914956ms: waiting for machine to come up
	I0311 21:34:27.056271   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.056879   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.056901   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.056834   71714 retry.go:31] will retry after 693.583075ms: waiting for machine to come up
	I0311 21:34:27.752514   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.752958   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.752980   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.752916   71714 retry.go:31] will retry after 902.247864ms: waiting for machine to come up
	I0311 21:34:28.657551   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:28.658023   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:28.658079   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:28.658008   71714 retry.go:31] will retry after 1.140425887s: waiting for machine to come up
	I0311 21:34:29.800305   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:29.800824   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:29.800852   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:29.800774   71714 retry.go:31] will retry after 1.68593342s: waiting for machine to come up
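
The libmachine lines above follow a retry-with-jittered-backoff pattern while waiting for the VM's DHCP lease. A generic sketch of that pattern; lookupIP is a placeholder for the libvirt lease query, and the exact delays differ from what retry.go computes:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for asking libvirt for the domain's current DHCP lease.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter so concurrent waiters do not sync up.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	fmt.Println("gave up waiting for machine IP")
}
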
	I0311 21:34:32.367999   70458 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.404768175s)
	I0311 21:34:32.368034   70458 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:34:32.375444   70458 kubeadm.go:733] kubelet initialised
	I0311 21:34:32.375468   70458 kubeadm.go:734] duration metric: took 7.423643ms waiting for restarted kubelet to initialise ...
	I0311 21:34:32.375477   70458 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:32.383579   70458 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.389728   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "coredns-76f75df574-s6lsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.389755   70458 pod_ready.go:81] duration metric: took 6.144226ms for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.389766   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "coredns-76f75df574-s6lsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.389775   70458 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.398797   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "etcd-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.398822   70458 pod_ready.go:81] duration metric: took 9.033188ms for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.398833   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "etcd-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.398841   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.407870   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-apiserver-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.407905   70458 pod_ready.go:81] duration metric: took 9.056349ms for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.407915   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-apiserver-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.407928   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.414434   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.414455   70458 pod_ready.go:81] duration metric: took 6.519611ms for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.414463   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.414468   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.771994   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-proxy-rmz4b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.772025   70458 pod_ready.go:81] duration metric: took 357.549783ms for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.772034   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-proxy-rmz4b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.772041   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:33.175562   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-scheduler-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.175595   70458 pod_ready.go:81] duration metric: took 403.546508ms for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:33.175608   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-scheduler-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.175617   70458 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:33.573749   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.573777   70458 pod_ready.go:81] duration metric: took 398.141162ms for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:33.573789   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.573799   70458 pod_ready.go:38] duration metric: took 1.198311127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
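
The pod_ready.go waits above repeatedly fetch each kube-system pod and check its Ready condition, skipping pods while the node itself still reports Ready=False. A simplified client-go sketch that waits on a single pod; the kubeconfig path and pod name echo this log but the code is illustrative, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18358-11004/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-no-preload-324578", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
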
	I0311 21:34:33.573862   70458 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:34:33.592112   70458 ops.go:34] apiserver oom_adj: -16
	I0311 21:34:33.592148   70458 kubeadm.go:591] duration metric: took 11.193640837s to restartPrimaryControlPlane
	I0311 21:34:33.592161   70458 kubeadm.go:393] duration metric: took 11.247001751s to StartCluster
	I0311 21:34:33.592181   70458 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:33.592269   70458 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:34:33.594144   70458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:33.594461   70458 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:34:33.596303   70458 out.go:177] * Verifying Kubernetes components...
	I0311 21:34:33.594553   70458 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:34:33.594702   70458 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:34:33.597724   70458 addons.go:69] Setting default-storageclass=true in profile "no-preload-324578"
	I0311 21:34:33.597727   70458 addons.go:69] Setting storage-provisioner=true in profile "no-preload-324578"
	I0311 21:34:33.597739   70458 addons.go:69] Setting metrics-server=true in profile "no-preload-324578"
	I0311 21:34:33.597759   70458 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-324578"
	I0311 21:34:33.597771   70458 addons.go:234] Setting addon storage-provisioner=true in "no-preload-324578"
	I0311 21:34:33.597772   70458 addons.go:234] Setting addon metrics-server=true in "no-preload-324578"
	W0311 21:34:33.597780   70458 addons.go:243] addon storage-provisioner should already be in state true
	W0311 21:34:33.597795   70458 addons.go:243] addon metrics-server should already be in state true
	I0311 21:34:33.597828   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.597838   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.597733   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:33.598079   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598110   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.598224   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598260   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598305   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.598269   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.613473   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0311 21:34:33.613994   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.614558   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.614576   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.614946   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.615385   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.615415   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.618026   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0311 21:34:33.618201   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0311 21:34:33.618370   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.618497   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.618818   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.618833   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.618978   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.618989   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.619157   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.619343   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.619389   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.619926   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.619956   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.623211   70458 addons.go:234] Setting addon default-storageclass=true in "no-preload-324578"
	W0311 21:34:33.623232   70458 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:34:33.623260   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.623634   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.623660   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.635263   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35961
	I0311 21:34:33.635575   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.636071   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.636080   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.636462   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.636606   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.638520   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.640583   70458 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:34:33.642029   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:34:33.642045   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:34:33.642058   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.640562   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0311 21:34:33.641020   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0311 21:34:33.642572   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.643082   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.643107   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.643432   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.644002   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.644030   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.644213   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.644711   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.644733   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.645120   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.645334   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.645406   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.645861   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.645888   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.646042   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.646332   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.646548   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.646719   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.646986   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.648681   70458 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:30.659466   70604 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.244884989s)
	I0311 21:34:30.659492   70604 crio.go:451] duration metric: took 3.244983149s to extract the tarball
	I0311 21:34:30.659500   70604 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:34:30.708661   70604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:30.769502   70604 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:34:30.769530   70604 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:34:30.769540   70604 kubeadm.go:928] updating node { 192.168.50.114 8443 v1.28.4 crio true true} ...
	I0311 21:34:30.769675   70604 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-743937 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:30.769757   70604 ssh_runner.go:195] Run: crio config
	I0311 21:34:30.820223   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:34:30.820251   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:30.820267   70604 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:30.820296   70604 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-743937 NodeName:embed-certs-743937 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:34:30.820475   70604 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-743937"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:30.820563   70604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 21:34:30.833086   70604 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:30.833175   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:30.844335   70604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0311 21:34:30.863586   70604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:34:30.883598   70604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0311 21:34:30.904711   70604 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:30.909433   70604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
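	The grep/cp one-liner above pins control-plane.minikube.internal to the node IP in /etc/hosts before the kubelet is restarted. Below is a hedged Go sketch of the same idea, illustrative only: minikube performs this over SSH with the shell command shown, and the IP is the one from this run.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.50.114\tcontrol-plane.minikube.internal" // IP taken from the log above

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Drop any stale control-plane.minikube.internal line, then append the fresh entry.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		fmt.Println("write:", err)
	}
}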
	I0311 21:34:30.924054   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:31.064573   70604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:31.096931   70604 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937 for IP: 192.168.50.114
	I0311 21:34:31.096960   70604 certs.go:194] generating shared ca certs ...
	I0311 21:34:31.096980   70604 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:31.097157   70604 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:31.097220   70604 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:31.097236   70604 certs.go:256] generating profile certs ...
	I0311 21:34:31.097368   70604 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/client.key
	I0311 21:34:31.097453   70604 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.key.c230aed9
	I0311 21:34:31.097520   70604 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.key
	I0311 21:34:31.097660   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:31.097709   70604 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:31.097770   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:31.097826   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:31.097867   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:31.097899   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:31.097958   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:31.098771   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:31.135109   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:31.173483   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:31.215059   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:31.253244   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0311 21:34:31.305450   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 21:34:31.340238   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:31.366993   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:31.393936   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:31.420998   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:31.446500   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:31.474047   70604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:31.493935   70604 ssh_runner.go:195] Run: openssl version
	I0311 21:34:31.500607   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:31.513874   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.519255   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.519303   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.525967   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:31.538995   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:31.551625   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.557235   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.557292   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.563658   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:31.576689   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:31.589299   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.594405   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.594453   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.601041   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:31.619307   70604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:31.624565   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:31.632121   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:31.638843   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:31.646400   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:31.652701   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:31.659661   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
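	Each openssl x509 -checkend 86400 call above verifies that a control-plane certificate will still be valid 24 hours from now, so the restart can reuse it instead of regenerating. A minimal sketch of the same check using Go's crypto/x509 instead of openssl (the path below is one of the files checked in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// Equivalent of "-checkend 86400": fail if the cert expires within 24 hours.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h, expires", cert.NotAfter)
	}
}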
	I0311 21:34:31.666390   70604 kubeadm.go:391] StartCluster: {Name:embed-certs-743937 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:31.666496   70604 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:31.666546   70604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:31.716714   70604 cri.go:89] found id: ""
	I0311 21:34:31.716796   70604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:31.733945   70604 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:31.733967   70604 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:31.733974   70604 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:31.734019   70604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:31.746543   70604 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:31.747720   70604 kubeconfig.go:125] found "embed-certs-743937" server: "https://192.168.50.114:8443"
	I0311 21:34:31.749670   70604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:31.762374   70604 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.114
	I0311 21:34:31.762401   70604 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:31.762410   70604 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:31.762462   70604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:31.811965   70604 cri.go:89] found id: ""
	I0311 21:34:31.812055   70604 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:31.836539   70604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:31.849272   70604 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:31.849295   70604 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:31.849348   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:31.861345   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:31.861423   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:31.875436   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:31.887183   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:31.887251   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:31.900032   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:31.911614   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:31.911690   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:31.924791   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:31.937131   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:31.937204   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:31.949123   70604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:31.960234   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:32.089622   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:32.806370   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.033263   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.135981   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
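	The five commands above re-run individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than performing a full kubeadm init, which is what allows the restart path to reuse the existing cluster state. A hedged sketch of that sequence is below; the binary and config paths are the ones shown in the log, and minikube itself drives these through its SSH runner rather than locally.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same phase order as the Run: lines above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubeadm", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("all kubeadm init phases completed")
}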
	I0311 21:34:33.248827   70604 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:33.248917   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:33.749207   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:33.650190   70458 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:34:33.650207   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:34:33.650223   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.653451   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.653895   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.653920   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.654131   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.654302   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.654472   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.654631   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.689121   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0311 21:34:33.689487   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.693084   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.693105   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.693596   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.693796   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.696074   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.696629   70458 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:34:33.696644   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:34:33.696662   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.699920   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.700323   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.700342   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.700564   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.700756   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.700859   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.700932   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.896331   70458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:33.969322   70458 node_ready.go:35] waiting up to 6m0s for node "no-preload-324578" to be "Ready" ...
	I0311 21:34:34.037114   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:34:34.059051   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:34:34.059080   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:34:34.094822   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:34:34.142231   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:34:34.142259   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:34:34.218979   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:34:34.219002   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:34:34.260381   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:34:35.648210   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.61103949s)
	I0311 21:34:35.648241   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.553388189s)
	I0311 21:34:35.648344   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648381   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648367   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648409   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648658   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.648675   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.648685   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648694   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648754   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.648997   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.649019   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.650050   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.650068   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.650091   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.650101   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.650367   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.650384   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.658738   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.658764   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.658991   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.659007   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.687393   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.426969773s)
	I0311 21:34:35.687453   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.687467   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.687771   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.687810   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.687828   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.687848   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.687831   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.688142   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.688164   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.688178   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.688214   70458 addons.go:470] Verifying addon metrics-server=true in "no-preload-324578"
	I0311 21:34:35.690413   70458 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
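	The addon manifests above are applied by invoking the cluster's own kubectl binary with an explicit KUBECONFIG, exactly as the Run: lines show. A minimal sketch of one such invocation follows; the paths and version are taken from this log, and this would execute on the node itself rather than through minikube's SSH runner.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// sudo accepts VAR=value assignments before the command, matching the log's
	// "sudo KUBECONFIG=... kubectl apply -f ..." invocation.
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("kubectl apply failed:", err)
	}
}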
	I0311 21:34:31.488010   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:31.488449   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:31.488471   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:31.488421   71714 retry.go:31] will retry after 2.325869089s: waiting for machine to come up
	I0311 21:34:33.815568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:33.816215   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:33.816236   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:33.816176   71714 retry.go:31] will retry after 2.457084002s: waiting for machine to come up
	I0311 21:34:34.249462   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:34.749177   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:34.778830   70604 api_server.go:72] duration metric: took 1.530004395s to wait for apiserver process to appear ...
	I0311 21:34:34.778858   70604 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:34:34.778879   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:34.779469   70604 api_server.go:269] stopped: https://192.168.50.114:8443/healthz: Get "https://192.168.50.114:8443/healthz": dial tcp 192.168.50.114:8443: connect: connection refused
	I0311 21:34:35.279027   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.110193   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:38.110221   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:38.110234   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.159861   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:38.159909   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:38.279045   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.289460   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:38.289491   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:38.779423   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.785174   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:38.785206   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:39.278910   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:39.290017   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:39.290054   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:39.779616   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:39.786362   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0311 21:34:39.794557   70604 api_server.go:141] control plane version: v1.28.4
	I0311 21:34:39.794583   70604 api_server.go:131] duration metric: took 5.01571788s to wait for apiserver health ...
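	The polling above tolerates the 403 responses (anonymous user) and 500 responses (poststarthooks not yet finished) while the apiserver bootstraps, and only treats a plain 200 "ok" as healthy. Below is a hedged sketch of that pattern; the URL is the one from this log, and minikube's real implementation in api_server.go is more involved (it also authenticates and reports per-check details).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.114:8443/healthz" // endpoint taken from the log above
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403/500 here simply means "not ready yet"; keep polling.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}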
	I0311 21:34:39.794594   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:34:39.794601   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:39.796063   70604 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:34:35.691844   70458 addons.go:505] duration metric: took 2.097304232s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0311 21:34:35.974533   70458 node_ready.go:53] node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:37.983073   70458 node_ready.go:53] node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:38.977713   70458 node_ready.go:49] node "no-preload-324578" has status "Ready":"True"
	I0311 21:34:38.977738   70458 node_ready.go:38] duration metric: took 5.008382488s for node "no-preload-324578" to be "Ready" ...
	I0311 21:34:38.977749   70458 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:38.986414   70458 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:38.993430   70458 pod_ready.go:92] pod "coredns-76f75df574-s6lsb" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:38.993454   70458 pod_ready.go:81] duration metric: took 7.012539ms for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:38.993465   70458 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:36.274640   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:36.275119   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:36.275157   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:36.275064   71714 retry.go:31] will retry after 3.618026102s: waiting for machine to come up
	I0311 21:34:39.894877   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:39.895397   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:39.895447   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:39.895343   71714 retry.go:31] will retry after 3.826847061s: waiting for machine to come up
	I0311 21:34:39.797420   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:34:39.810877   70604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:34:39.836773   70604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:34:39.852496   70604 system_pods.go:59] 8 kube-system pods found
	I0311 21:34:39.852541   70604 system_pods.go:61] "coredns-5dd5756b68-czng9" [a57d0643-36c5-44e2-a113-de051d0e0408] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:34:39.852556   70604 system_pods.go:61] "etcd-embed-certs-743937" [9f0051e8-247f-4968-a834-c38c5f0c4407] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:34:39.852567   70604 system_pods.go:61] "kube-apiserver-embed-certs-743937" [4ac979a6-1906-4a58-9d41-9587d66d81ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:34:39.852578   70604 system_pods.go:61] "kube-controller-manager-embed-certs-743937" [263ba100-e911-4857-a973-c4dc9312a653] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:34:39.852591   70604 system_pods.go:61] "kube-proxy-n2qzt" [21f56cfb-a3f5-4c4b-993d-53b6d8f60ec2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:34:39.852600   70604 system_pods.go:61] "kube-scheduler-embed-certs-743937" [0121fa4d-91a8-432b-9f21-c6e8c0b33872] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:34:39.852606   70604 system_pods.go:61] "metrics-server-57f55c9bc5-7qw98" [3d3f2e87-2e36-4ca3-b31c-fc5f38251f03] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:34:39.852617   70604 system_pods.go:61] "storage-provisioner" [72fd13c7-1a79-4e8a-bdc2-f45117599d85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:34:39.852624   70604 system_pods.go:74] duration metric: took 15.823708ms to wait for pod list to return data ...
	I0311 21:34:39.852634   70604 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:34:39.856288   70604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:34:39.856309   70604 node_conditions.go:123] node cpu capacity is 2
	I0311 21:34:39.856317   70604 node_conditions.go:105] duration metric: took 3.676347ms to run NodePressure ...
	I0311 21:34:39.856331   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:40.103882   70604 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:34:40.108726   70604 kubeadm.go:733] kubelet initialised
	I0311 21:34:40.108758   70604 kubeadm.go:734] duration metric: took 4.847245ms waiting for restarted kubelet to initialise ...
	I0311 21:34:40.108768   70604 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:40.115566   70604 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-czng9" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:42.124435   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:45.026187   70417 start.go:364] duration metric: took 58.09976601s to acquireMachinesLock for "default-k8s-diff-port-766430"
	I0311 21:34:45.026231   70417 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:45.026242   70417 fix.go:54] fixHost starting: 
	I0311 21:34:45.026632   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:45.026661   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:45.046341   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0311 21:34:45.046779   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:45.047336   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:34:45.047375   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:45.047741   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:45.047920   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:34:45.048090   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:34:45.049581   70417 fix.go:112] recreateIfNeeded on default-k8s-diff-port-766430: state=Stopped err=<nil>
	I0311 21:34:45.049605   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	W0311 21:34:45.049759   70417 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:45.051505   70417 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-766430" ...
	I0311 21:34:41.001474   70458 pod_ready.go:102] pod "etcd-no-preload-324578" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:43.500991   70458 pod_ready.go:92] pod "etcd-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.501018   70458 pod_ready.go:81] duration metric: took 4.507545237s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.501030   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.506732   70458 pod_ready.go:92] pod "kube-apiserver-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.506753   70458 pod_ready.go:81] duration metric: took 5.714866ms for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.506764   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.511432   70458 pod_ready.go:92] pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.511456   70458 pod_ready.go:81] duration metric: took 4.684021ms for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.511469   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.516333   70458 pod_ready.go:92] pod "kube-proxy-rmz4b" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.516360   70458 pod_ready.go:81] duration metric: took 4.882955ms for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.516370   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.521501   70458 pod_ready.go:92] pod "kube-scheduler-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.521524   70458 pod_ready.go:81] duration metric: took 5.146945ms for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.521532   70458 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.723851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724335   70908 main.go:141] libmachine: (old-k8s-version-239315) Found IP for machine: 192.168.72.52
	I0311 21:34:43.724367   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserving static IP address...
	I0311 21:34:43.724382   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has current primary IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724722   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.724759   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserved static IP address: 192.168.72.52
	I0311 21:34:43.724774   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | skip adding static IP to network mk-old-k8s-version-239315 - found existing host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"}
	I0311 21:34:43.724797   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Getting to WaitForSSH function...
	I0311 21:34:43.724815   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting for SSH to be available...
	I0311 21:34:43.727015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727330   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.727354   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727541   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH client type: external
	I0311 21:34:43.727568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa (-rw-------)
	I0311 21:34:43.727624   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:43.727641   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | About to run SSH command:
	I0311 21:34:43.727651   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | exit 0
	I0311 21:34:43.848884   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:43.849287   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetConfigRaw
	I0311 21:34:43.850084   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:43.852942   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853529   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.853572   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853801   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:34:43.854001   70908 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:43.854024   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:43.854255   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.856623   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857153   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.857187   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857321   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.857516   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857702   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857897   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.858105   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.858332   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.858349   70908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:43.961617   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:43.961664   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.961921   70908 buildroot.go:166] provisioning hostname "old-k8s-version-239315"
	I0311 21:34:43.961945   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.962134   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.964672   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.964987   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.965015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.965122   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.965305   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965466   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965591   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.965801   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.966042   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.966055   70908 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239315 && echo "old-k8s-version-239315" | sudo tee /etc/hostname
	I0311 21:34:44.088097   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239315
	
	I0311 21:34:44.088126   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.090911   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091167   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.091205   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091347   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.091524   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091680   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091818   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.091984   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.092185   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.092205   70908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239315' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239315/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239315' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:44.207643   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:44.207674   70908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:44.207693   70908 buildroot.go:174] setting up certificates
	I0311 21:34:44.207701   70908 provision.go:84] configureAuth start
	I0311 21:34:44.207710   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:44.207975   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:44.211160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211556   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.211588   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211754   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.214211   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214553   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.214585   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214732   70908 provision.go:143] copyHostCerts
	I0311 21:34:44.214797   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:44.214813   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:44.214886   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:44.214991   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:44.215005   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:44.215035   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:44.215160   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:44.215171   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:44.215198   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:44.215267   70908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239315 san=[127.0.0.1 192.168.72.52 localhost minikube old-k8s-version-239315]
	I0311 21:34:44.305250   70908 provision.go:177] copyRemoteCerts
	I0311 21:34:44.305329   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:44.305367   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.308244   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308636   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.308673   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308874   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.309092   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.309290   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.309446   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.394958   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:44.423314   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0311 21:34:44.459338   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:34:44.491201   70908 provision.go:87] duration metric: took 283.487383ms to configureAuth
	I0311 21:34:44.491232   70908 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:44.491419   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:34:44.491484   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.494039   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494476   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.494509   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494638   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.494830   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.494998   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.495175   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.495366   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.495548   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.495570   70908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:44.787935   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:44.787961   70908 machine.go:97] duration metric: took 933.945971ms to provisionDockerMachine
	I0311 21:34:44.787971   70908 start.go:293] postStartSetup for "old-k8s-version-239315" (driver="kvm2")
	I0311 21:34:44.787983   70908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:44.788007   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:44.788327   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:44.788355   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.791133   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791460   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.791492   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791637   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.791858   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.792021   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.792165   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.877163   70908 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:44.882141   70908 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:44.882164   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:44.882241   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:44.882330   70908 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:44.882442   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:44.894699   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:44.919809   70908 start.go:296] duration metric: took 131.8264ms for postStartSetup
	I0311 21:34:44.919848   70908 fix.go:56] duration metric: took 21.376188092s for fixHost
	I0311 21:34:44.919867   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.922414   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922708   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.922738   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922876   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.923075   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923274   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923455   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.923618   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.923806   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.923831   70908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:45.026068   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192885.004450463
	
	I0311 21:34:45.026088   70908 fix.go:216] guest clock: 1710192885.004450463
	I0311 21:34:45.026096   70908 fix.go:229] Guest: 2024-03-11 21:34:45.004450463 +0000 UTC Remote: 2024-03-11 21:34:44.919851167 +0000 UTC m=+283.922086595 (delta=84.599296ms)
	I0311 21:34:45.026118   70908 fix.go:200] guest clock delta is within tolerance: 84.599296ms
	I0311 21:34:45.026124   70908 start.go:83] releasing machines lock for "old-k8s-version-239315", held for 21.482500591s
	I0311 21:34:45.026158   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.026440   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:45.029366   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029778   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.029813   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029992   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030514   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030711   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030800   70908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:45.030846   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.030946   70908 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:45.030971   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.033851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.033989   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034264   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034292   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034324   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034348   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034429   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034618   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034633   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034799   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034814   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034979   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034977   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.035143   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.135748   70908 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:45.142408   70908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:45.297445   70908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:45.304482   70908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:45.304552   70908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:45.322754   70908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:45.322775   70908 start.go:494] detecting cgroup driver to use...
	I0311 21:34:45.322832   70908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:45.345988   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:45.363267   70908 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:45.363320   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:45.380892   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:45.396972   70908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:45.531640   70908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:45.700243   70908 docker.go:233] disabling docker service ...
	I0311 21:34:45.700306   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:45.730542   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:45.749068   70908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:45.903721   70908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:46.045122   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:46.065278   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:46.090726   70908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0311 21:34:46.090779   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.105783   70908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:46.105841   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.121702   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.136262   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.150628   70908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:46.163771   70908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:46.175613   70908 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:46.175675   70908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:46.193848   70908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:46.205694   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:46.344832   70908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:46.501773   70908 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:46.501851   70908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:46.507932   70908 start.go:562] Will wait 60s for crictl version
	I0311 21:34:46.507988   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:46.512337   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:46.555165   70908 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:46.555249   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.588554   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.623785   70908 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0311 21:34:44.627149   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:47.128405   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:45.052882   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Start
	I0311 21:34:45.053039   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring networks are active...
	I0311 21:34:45.053710   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring network default is active
	I0311 21:34:45.054156   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring network mk-default-k8s-diff-port-766430 is active
	I0311 21:34:45.054499   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Getting domain xml...
	I0311 21:34:45.055347   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Creating domain...
	I0311 21:34:46.378216   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting to get IP...
	I0311 21:34:46.379054   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.379376   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.379485   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.379392   71893 retry.go:31] will retry after 242.915621ms: waiting for machine to come up
	I0311 21:34:46.623729   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.624348   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.624375   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.624304   71893 retry.go:31] will retry after 274.237436ms: waiting for machine to come up
	I0311 21:34:46.899864   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.900347   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.900381   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.900296   71893 retry.go:31] will retry after 333.693752ms: waiting for machine to come up
	I0311 21:34:47.235751   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.236278   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.236309   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:47.236220   71893 retry.go:31] will retry after 513.728994ms: waiting for machine to come up
	I0311 21:34:47.752081   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.752585   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.752622   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:47.752553   71893 retry.go:31] will retry after 575.202217ms: waiting for machine to come up
	I0311 21:34:48.329095   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:48.329524   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:48.329557   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:48.329477   71893 retry.go:31] will retry after 741.05703ms: waiting for machine to come up
	I0311 21:34:49.072641   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.073163   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.073195   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:49.073101   71893 retry.go:31] will retry after 802.911807ms: waiting for machine to come up
	I0311 21:34:45.528876   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:47.530391   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:49.530451   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:46.625154   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:46.627732   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628080   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:46.628102   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628304   70908 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:46.633367   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:46.649537   70908 kubeadm.go:877] updating cluster {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:46.649677   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:34:46.649733   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:46.699194   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:46.699264   70908 ssh_runner.go:195] Run: which lz4
	I0311 21:34:46.703944   70908 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0311 21:34:46.709224   70908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:34:46.709258   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0311 21:34:48.747926   70908 crio.go:444] duration metric: took 2.044006932s to copy over tarball
	I0311 21:34:48.747994   70908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:34:49.629334   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:51.122454   70604 pod_ready.go:92] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:51.122481   70604 pod_ready.go:81] duration metric: took 11.006878828s for pod "coredns-5dd5756b68-czng9" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:51.122494   70604 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.227971   70604 pod_ready.go:92] pod "etcd-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.228001   70604 pod_ready.go:81] duration metric: took 1.105498501s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.228014   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.234804   70604 pod_ready.go:92] pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.234834   70604 pod_ready.go:81] duration metric: took 6.811865ms for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.234854   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.241448   70604 pod_ready.go:92] pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.241473   70604 pod_ready.go:81] duration metric: took 6.611927ms for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.241486   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n2qzt" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.249614   70604 pod_ready.go:92] pod "kube-proxy-n2qzt" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.249648   70604 pod_ready.go:81] duration metric: took 8.154372ms for pod "kube-proxy-n2qzt" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.249661   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:53.139924   70604 pod_ready.go:92] pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:53.139951   70604 pod_ready.go:81] duration metric: took 890.27792ms for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:53.139961   70604 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:49.877965   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.878438   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.878460   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:49.878397   71893 retry.go:31] will retry after 1.163030899s: waiting for machine to come up
	I0311 21:34:51.042660   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:51.043181   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:51.043210   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:51.043131   71893 retry.go:31] will retry after 1.225509553s: waiting for machine to come up
	I0311 21:34:52.269779   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:52.270321   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:52.270358   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:52.270250   71893 retry.go:31] will retry after 2.091046831s: waiting for machine to come up
	I0311 21:34:54.363231   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:54.363664   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:54.363693   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:54.363618   71893 retry.go:31] will retry after 1.759309864s: waiting for machine to come up
	I0311 21:34:52.031032   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:54.529537   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:52.300295   70908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.55227284s)
	I0311 21:34:52.300322   70908 crio.go:451] duration metric: took 3.552370125s to extract the tarball
	I0311 21:34:52.300331   70908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:34:52.349405   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:52.395791   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:52.395821   70908 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:34:52.395892   70908 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.395955   70908 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.396002   70908 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.396010   70908 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.395959   70908 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.395932   70908 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.395921   70908 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0311 21:34:52.395974   70908 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397721   70908 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.397760   70908 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.397767   70908 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.397768   70908 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.397762   70908 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397804   70908 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.398008   70908 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0311 21:34:52.398129   70908 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.548255   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.549300   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.560293   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.564094   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.564433   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.569516   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.578251   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0311 21:34:52.674385   70908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0311 21:34:52.674427   70908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.674475   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.725602   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.741797   70908 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0311 21:34:52.741840   70908 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.741882   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.793195   70908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0311 21:34:52.793239   70908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.793278   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798118   70908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0311 21:34:52.798174   70908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.798220   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798241   70908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0311 21:34:52.798277   70908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.798312   70908 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0311 21:34:52.798333   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798285   70908 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0311 21:34:52.798378   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.798399   70908 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.798434   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798336   70908 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0311 21:34:52.798510   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.957658   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.957712   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.957765   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.957816   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.957846   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0311 21:34:52.957904   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0311 21:34:52.957925   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:53.106649   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0311 21:34:53.106699   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0311 21:34:53.106913   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0311 21:34:53.107837   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0311 21:34:53.116024   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0311 21:34:53.122060   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0311 21:34:53.122118   70908 cache_images.go:92] duration metric: took 726.282306ms to LoadCachedImages
	W0311 21:34:53.122205   70908 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0311 21:34:53.122224   70908 kubeadm.go:928] updating node { 192.168.72.52 8443 v1.20.0 crio true true} ...
	I0311 21:34:53.122341   70908 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239315 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:53.122443   70908 ssh_runner.go:195] Run: crio config
	I0311 21:34:53.192161   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:34:53.192191   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:53.192211   70908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:53.192233   70908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.52 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239315 NodeName:old-k8s-version-239315 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0311 21:34:53.192405   70908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239315"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:53.192476   70908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0311 21:34:53.203965   70908 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:53.204019   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:53.215221   70908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0311 21:34:53.235943   70908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:34:53.255383   70908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0311 21:34:53.276634   70908 ssh_runner.go:195] Run: grep 192.168.72.52	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:53.281778   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:53.298479   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:53.450052   70908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:53.472459   70908 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315 for IP: 192.168.72.52
	I0311 21:34:53.472480   70908 certs.go:194] generating shared ca certs ...
	I0311 21:34:53.472524   70908 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:53.472676   70908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:53.472728   70908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:53.472771   70908 certs.go:256] generating profile certs ...
	I0311 21:34:53.472883   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/client.key
	I0311 21:34:53.472954   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key.1e888bb1
	I0311 21:34:53.473013   70908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key
	I0311 21:34:53.473143   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:53.473185   70908 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:53.473198   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:53.473237   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:53.473272   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:53.473307   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:53.473363   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:53.473988   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:53.527429   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:53.575908   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:53.622438   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:53.665366   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 21:34:53.702121   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0311 21:34:53.746066   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:53.779151   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:53.813286   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:53.847058   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:53.882261   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:53.912444   70908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:53.932592   70908 ssh_runner.go:195] Run: openssl version
	I0311 21:34:53.939200   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:53.955630   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960866   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960920   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.967258   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:53.981075   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:53.995065   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000196   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000272   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.008574   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:54.022782   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:54.037409   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042893   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042965   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.049497   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:54.062597   70908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:54.067971   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:54.074746   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:54.081323   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:54.088762   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:54.095529   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:54.102396   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 21:34:54.109553   70908 kubeadm.go:391] StartCluster: {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:54.109639   70908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:54.109689   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.152063   70908 cri.go:89] found id: ""
	I0311 21:34:54.152143   70908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:54.163988   70908 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:54.164005   70908 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:54.164011   70908 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:54.164050   70908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:54.175616   70908 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:54.176779   70908 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239315" does not appear in /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:34:54.177542   70908 kubeconfig.go:62] /home/jenkins/minikube-integration/18358-11004/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239315" cluster setting kubeconfig missing "old-k8s-version-239315" context setting]
	I0311 21:34:54.178649   70908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:54.180405   70908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:54.191864   70908 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.52
	I0311 21:34:54.191891   70908 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:54.191903   70908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:54.191948   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.233779   70908 cri.go:89] found id: ""
	I0311 21:34:54.233852   70908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:54.253672   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:54.266010   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:54.266038   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:54.266085   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:54.277867   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:54.277918   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:54.288984   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:54.300133   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:54.300197   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:54.312090   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.323997   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:54.324059   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.337225   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:54.348223   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:54.348266   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:54.359245   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:54.370003   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:54.525972   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.408437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.676995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.819933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.913736   70908 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:55.913811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:55.147500   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:57.148276   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:56.124678   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:56.125150   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:56.125183   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:56.125101   71893 retry.go:31] will retry after 2.284226205s: waiting for machine to come up
	I0311 21:34:58.412391   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:58.412973   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:58.413002   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:58.412923   71893 retry.go:31] will retry after 4.532871869s: waiting for machine to come up
	I0311 21:34:57.031683   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:59.032261   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:56.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:56.914753   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.413928   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.914123   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.413931   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.914199   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.414205   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.913880   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.914121   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.148774   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:01.646997   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:03.647990   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:02.948316   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:02.948762   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:35:02.948790   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:35:02.948704   71893 retry.go:31] will retry after 4.885152649s: waiting for machine to come up
	I0311 21:35:01.529589   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:04.028860   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:01.414003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:01.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.913977   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.414740   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.914735   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.414726   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.914846   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.914715   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.648516   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:08.147744   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:07.835002   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.835551   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Found IP for machine: 192.168.61.11
	I0311 21:35:07.835585   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Reserving static IP address...
	I0311 21:35:07.835601   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has current primary IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.836026   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-766430", mac: "52:54:00:41:07:8d", ip: "192.168.61.11"} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.836055   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | skip adding static IP to network mk-default-k8s-diff-port-766430 - found existing host DHCP lease matching {name: "default-k8s-diff-port-766430", mac: "52:54:00:41:07:8d", ip: "192.168.61.11"}
	I0311 21:35:07.836075   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Reserved static IP address: 192.168.61.11
	I0311 21:35:07.836110   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Getting to WaitForSSH function...
	I0311 21:35:07.836125   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for SSH to be available...
	I0311 21:35:07.838230   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.838601   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.838631   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.838757   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Using SSH client type: external
	I0311 21:35:07.838784   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa (-rw-------)
	I0311 21:35:07.838830   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:35:07.838871   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | About to run SSH command:
	I0311 21:35:07.838897   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | exit 0
	I0311 21:35:07.968765   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | SSH cmd err, output: <nil>: 
	I0311 21:35:07.969119   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetConfigRaw
	I0311 21:35:07.969756   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:07.972490   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.972921   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.972949   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.973180   70417 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/config.json ...
	I0311 21:35:07.973362   70417 machine.go:94] provisionDockerMachine start ...
	I0311 21:35:07.973381   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:07.973582   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:07.975926   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.976254   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.976277   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.976419   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:07.976566   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:07.976704   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:07.976847   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:07.976991   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:07.977161   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:07.977171   70417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:35:08.093841   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:35:08.093864   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.094076   70417 buildroot.go:166] provisioning hostname "default-k8s-diff-port-766430"
	I0311 21:35:08.094100   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.094329   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.097134   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.097498   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.097528   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.097670   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.097854   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.098021   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.098178   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.098409   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.098642   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.098657   70417 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-766430 && echo "default-k8s-diff-port-766430" | sudo tee /etc/hostname
	I0311 21:35:08.233860   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766430
	
	I0311 21:35:08.233890   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.236977   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.237387   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.237408   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.237596   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.237791   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.237962   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.238194   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.238359   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.238515   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.238532   70417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-766430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-766430/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-766430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:35:08.363393   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:35:08.363419   70417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:35:08.363471   70417 buildroot.go:174] setting up certificates
	I0311 21:35:08.363484   70417 provision.go:84] configureAuth start
	I0311 21:35:08.363497   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.363780   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:08.366605   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.366990   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.367012   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.367139   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.369314   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.369650   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.369676   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.369798   70417 provision.go:143] copyHostCerts
	I0311 21:35:08.369853   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:35:08.369863   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:35:08.369915   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:35:08.370005   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:35:08.370013   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:35:08.370032   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:35:08.370091   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:35:08.370098   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:35:08.370114   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:35:08.370169   70417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-766430 san=[127.0.0.1 192.168.61.11 default-k8s-diff-port-766430 localhost minikube]
	I0311 21:35:08.542469   70417 provision.go:177] copyRemoteCerts
	I0311 21:35:08.542529   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:35:08.542550   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.545388   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.545750   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.545782   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.545958   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.546115   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.546264   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.546360   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:08.635866   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:35:08.667490   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0311 21:35:08.697944   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:35:08.726836   70417 provision.go:87] duration metric: took 363.34159ms to configureAuth
	I0311 21:35:08.726860   70417 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:35:08.727033   70417 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:35:08.727115   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.730050   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.730458   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.730489   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.730788   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.730987   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.731170   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.731317   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.731466   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.731607   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.731629   70417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:35:09.035100   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:35:09.035129   70417 machine.go:97] duration metric: took 1.061753229s to provisionDockerMachine
	I0311 21:35:09.035142   70417 start.go:293] postStartSetup for "default-k8s-diff-port-766430" (driver="kvm2")
	I0311 21:35:09.035151   70417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:35:09.035165   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.035458   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:35:09.035484   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.038340   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.038638   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.038668   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.038829   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.039027   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.039178   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.039343   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:09.133013   70417 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:35:09.138043   70417 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:35:09.138065   70417 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:35:09.138166   70417 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:35:09.138259   70417 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:35:09.138364   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:35:09.149527   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:35:09.176424   70417 start.go:296] duration metric: took 141.271199ms for postStartSetup
	I0311 21:35:09.176460   70417 fix.go:56] duration metric: took 24.15021813s for fixHost
	I0311 21:35:09.176479   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.179447   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.179830   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.179859   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.180147   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.180402   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.180566   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.180758   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.180974   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:09.181186   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:09.181200   70417 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 21:35:09.297740   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192909.282566583
	
	I0311 21:35:09.297764   70417 fix.go:216] guest clock: 1710192909.282566583
	I0311 21:35:09.297773   70417 fix.go:229] Guest: 2024-03-11 21:35:09.282566583 +0000 UTC Remote: 2024-03-11 21:35:09.176465496 +0000 UTC m=+364.839103648 (delta=106.101087ms)
	I0311 21:35:09.297795   70417 fix.go:200] guest clock delta is within tolerance: 106.101087ms
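
The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it to the host clock, and accept the drift when it stays inside a tolerance. A small sketch of that comparison, assuming a one-second tolerance purely for illustration; the timestamps in main are the ones reported in the log.

package main

import (
	"fmt"
	"time"
)

// checkClockDelta reports the absolute skew between guest and host clocks and
// whether it falls within the given tolerance.
func checkClockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log: guest 21:35:09.282566583, remote 21:35:09.176465496 UTC.
	guest := time.Date(2024, 3, 11, 21, 35, 9, 282566583, time.UTC)
	host := time.Date(2024, 3, 11, 21, 35, 9, 176465496, time.UTC)
	delta, ok := checkClockDelta(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
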
	I0311 21:35:09.297802   70417 start.go:83] releasing machines lock for "default-k8s-diff-port-766430", held for 24.271590337s
	I0311 21:35:09.297825   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.298067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:09.300989   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.301399   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.301422   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.301604   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302091   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302291   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302385   70417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:35:09.302433   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.302490   70417 ssh_runner.go:195] Run: cat /version.json
	I0311 21:35:09.302515   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.305403   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305572   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305802   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.305831   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305912   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.306042   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.306067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.306067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.306223   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.306351   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.306430   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:09.306511   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.306645   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.306772   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:06.528726   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:09.029055   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:09.419852   70417 ssh_runner.go:195] Run: systemctl --version
	I0311 21:35:09.427141   70417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:35:09.579321   70417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:35:09.586396   70417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:35:09.586470   70417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:35:09.606617   70417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:35:09.606639   70417 start.go:494] detecting cgroup driver to use...
	I0311 21:35:09.606705   70417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:35:09.627066   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:35:09.646091   70417 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:35:09.646151   70417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:35:09.662307   70417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:35:09.679793   70417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:35:09.828827   70417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:35:09.984773   70417 docker.go:233] disabling docker service ...
	I0311 21:35:09.984843   70417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:35:10.003968   70417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:35:10.018609   70417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:35:10.174297   70417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:35:10.316762   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:35:10.338008   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:35:10.359320   70417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:35:10.359374   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.371953   70417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:35:10.372008   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.384823   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.397305   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
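
These sed invocations rewrite single keys in /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.9 and cgroup_manager is switched to cgroupfs. A minimal Go sketch of the same whole-line replacement, assuming a small in-memory config fragment; setConfValue and the sample text are illustrative, not minikube code.

package main

import (
	"fmt"
	"regexp"
)

// setConfValue replaces the entire line that assigns a given key in a
// crio.conf-style fragment, the way the sed commands in the log do.
func setConfValue(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
	conf = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setConfValue(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
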
	I0311 21:35:10.409521   70417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:35:10.424714   70417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:35:10.438470   70417 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:35:10.438529   70417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:35:10.454436   70417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:35:10.465004   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:35:10.611379   70417 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:35:10.786860   70417 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:35:10.786959   70417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:35:10.792496   70417 start.go:562] Will wait 60s for crictl version
	I0311 21:35:10.792551   70417 ssh_runner.go:195] Run: which crictl
	I0311 21:35:10.797079   70417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:35:10.837010   70417 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:35:10.837086   70417 ssh_runner.go:195] Run: crio --version
	I0311 21:35:10.868308   70417 ssh_runner.go:195] Run: crio --version
	I0311 21:35:10.900087   70417 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
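
After restarting crio, the log waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl. A sketch of such a poll loop; the 500ms interval and the short timeout used in main are assumptions so the example exits quickly, and the helper name is hypothetical.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it exists or the timeout
// elapses, mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 2*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
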
	I0311 21:35:06.414389   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:06.914233   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.414565   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.914773   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.414348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.914743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.413987   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.914698   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.150688   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:12.648444   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:10.901304   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:10.904103   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:10.904380   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:10.904407   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:10.904557   70417 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0311 21:35:10.909585   70417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:35:10.924163   70417 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-766430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:35:10.924311   70417 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:35:10.924408   70417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:35:10.969555   70417 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 21:35:10.969623   70417 ssh_runner.go:195] Run: which lz4
	I0311 21:35:10.974054   70417 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:35:10.978776   70417 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:35:10.978811   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 21:35:12.893346   70417 crio.go:444] duration metric: took 1.91931676s to copy over tarball
	I0311 21:35:12.893421   70417 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:35:11.031301   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:13.527896   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:11.414320   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:11.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.414529   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.914476   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.414282   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.914426   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.414521   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.914001   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.414839   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.913921   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.648625   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:17.148688   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:15.772070   70417 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.878627154s)
	I0311 21:35:15.772094   70417 crio.go:451] duration metric: took 2.878719213s to extract the tarball
	I0311 21:35:15.772101   70417 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:35:15.818581   70417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:35:15.872635   70417 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:35:15.872658   70417 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:35:15.872667   70417 kubeadm.go:928] updating node { 192.168.61.11 8444 v1.28.4 crio true true} ...
	I0311 21:35:15.872823   70417 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-766430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
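
The kubelet unit above is rendered from the node config (binary directory, hostname override, node IP). A minimal sketch of producing that drop-in with text/template; the template text paraphrases the logged unit and the struct fields are illustrative, not minikube's internal types.

package main

import (
	"os"
	"text/template"
)

// dropIn paraphrases the kubelet systemd drop-in shown in the log above.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the logged unit for this profile.
	_ = t.Execute(os.Stdout, struct {
		BinDir, NodeName, NodeIP string
	}{"/var/lib/minikube/binaries/v1.28.4", "default-k8s-diff-port-766430", "192.168.61.11"})
}
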
	I0311 21:35:15.872933   70417 ssh_runner.go:195] Run: crio config
	I0311 21:35:15.928776   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:35:15.928803   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:35:15.928818   70417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:35:15.928843   70417 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-766430 NodeName:default-k8s-diff-port-766430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:35:15.929018   70417 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-766430"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:35:15.929090   70417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 21:35:15.941853   70417 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:35:15.941908   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:35:15.954936   70417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0311 21:35:15.975236   70417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:35:15.994509   70417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0311 21:35:16.014058   70417 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I0311 21:35:16.018972   70417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:35:16.035169   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:35:16.160453   70417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:35:16.182252   70417 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430 for IP: 192.168.61.11
	I0311 21:35:16.182272   70417 certs.go:194] generating shared ca certs ...
	I0311 21:35:16.182286   70417 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:35:16.182419   70417 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:35:16.182465   70417 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:35:16.182475   70417 certs.go:256] generating profile certs ...
	I0311 21:35:16.182545   70417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/client.key
	I0311 21:35:16.182601   70417 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.key.2c00376c
	I0311 21:35:16.182635   70417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.key
	I0311 21:35:16.182754   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:35:16.182783   70417 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:35:16.182789   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:35:16.182823   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:35:16.182844   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:35:16.182867   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:35:16.182901   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:35:16.183517   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:35:16.231409   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:35:16.277004   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:35:16.315346   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:35:16.352697   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0311 21:35:16.388570   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 21:35:16.422830   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:35:16.452562   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 21:35:16.480976   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:35:16.507149   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:35:16.535832   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:35:16.566697   70417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:35:16.587454   70417 ssh_runner.go:195] Run: openssl version
	I0311 21:35:16.593880   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:35:16.608197   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.613604   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.613673   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.620156   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:35:16.632634   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:35:16.646047   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.652530   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.652591   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.660480   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:35:16.673572   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:35:16.687161   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.692589   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.692632   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.705471   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:35:16.718251   70417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:35:16.723979   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:35:16.731335   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:35:16.738485   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:35:16.745489   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:35:16.752295   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:35:16.759251   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
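
Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check in Go with crypto/x509, assuming a PEM-encoded certificate file; the path used in main is just one of the certificates named in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether a PEM-encoded certificate expires within d,
// roughly what `openssl x509 -checkend 86400` verifies.
func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	soon, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
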
	I0311 21:35:16.766128   70417 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-766430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:35:16.766237   70417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:35:16.766292   70417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:35:16.806418   70417 cri.go:89] found id: ""
	I0311 21:35:16.806478   70417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:35:16.821434   70417 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:35:16.821455   70417 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:35:16.821462   70417 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:35:16.821514   70417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:35:16.835457   70417 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:35:16.836764   70417 kubeconfig.go:125] found "default-k8s-diff-port-766430" server: "https://192.168.61.11:8444"
	I0311 21:35:16.839163   70417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:35:16.850037   70417 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.11
	I0311 21:35:16.850065   70417 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:35:16.850074   70417 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:35:16.850117   70417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:35:16.895532   70417 cri.go:89] found id: ""
	I0311 21:35:16.895612   70417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:35:16.913151   70417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:35:16.927989   70417 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:35:16.928014   70417 kubeadm.go:156] found existing configuration files:
	
	I0311 21:35:16.928073   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0311 21:35:16.939803   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:35:16.939849   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:35:16.950103   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0311 21:35:16.960164   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:35:16.960213   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:35:16.970349   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0311 21:35:16.980056   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:35:16.980098   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:35:16.990189   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0311 21:35:16.999799   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:35:16.999874   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:35:17.010502   70417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:35:17.021106   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:17.136170   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.044684   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.296278   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.376702   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.473740   70417 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:35:18.473840   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.974894   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.529099   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:17.755777   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:20.028341   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:16.414018   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:16.914685   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.414894   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.914319   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.414875   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.914338   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.414496   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.914396   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.414731   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.914149   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.648967   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:22.148024   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:19.474609   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.499907   70417 api_server.go:72] duration metric: took 1.026169594s to wait for apiserver process to appear ...
	I0311 21:35:19.499931   70417 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:35:19.499951   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:19.500566   70417 api_server.go:269] stopped: https://192.168.61.11:8444/healthz: Get "https://192.168.61.11:8444/healthz": dial tcp 192.168.61.11:8444: connect: connection refused
	I0311 21:35:20.000807   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:22.693958   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:35:22.693991   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:35:22.694006   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:22.772747   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:22.772792   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:23.000004   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:23.004763   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:23.004805   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:23.500112   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:23.507209   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:23.507236   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:24.000861   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:24.006793   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:24.006830   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:24.500264   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:24.508242   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0311 21:35:24.520230   70417 api_server.go:141] control plane version: v1.28.4
	I0311 21:35:24.520255   70417 api_server.go:131] duration metric: took 5.020318338s to wait for apiserver health ...
	I0311 21:35:24.520285   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:35:24.520291   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:35:24.522151   70417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:35:22.029963   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:24.530052   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:21.414126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:21.914012   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.414680   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.414478   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.914770   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.414370   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.914772   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.413991   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.914516   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.149179   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:26.647134   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:28.647725   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:24.523964   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:35:24.538536   70417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:35:24.583279   70417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:35:24.594703   70417 system_pods.go:59] 8 kube-system pods found
	I0311 21:35:24.594730   70417 system_pods.go:61] "coredns-5dd5756b68-pkn9d" [ee4de3f7-1044-4dc9-91dc-d9b23493b0bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:35:24.594737   70417 system_pods.go:61] "etcd-default-k8s-diff-port-766430" [96b9327c-f97d-463f-9d1e-3210b4032aab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:35:24.594751   70417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766430" [fc650f48-2e28-4219-8571-8b6c43891eb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:35:24.594763   70417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766430" [c7cc5d40-ad56-4132-ab81-3422ffe1d5b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:35:24.594772   70417 system_pods.go:61] "kube-proxy-cggzr" [f6b7fe4e-7d57-4604-b63d-f9890826b659] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:35:24.594784   70417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766430" [8a156fec-b2f3-46e8-bf0d-0bf291ef8783] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:35:24.594795   70417 system_pods.go:61] "metrics-server-57f55c9bc5-kxl6n" [ac62700b-a39a-480e-841e-852bf3c66e7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:35:24.594805   70417 system_pods.go:61] "storage-provisioner" [a0b03582-0d90-4a7f-919c-0552046edcb5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:35:24.594821   70417 system_pods.go:74] duration metric: took 11.523907ms to wait for pod list to return data ...
	I0311 21:35:24.594830   70417 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:35:24.606500   70417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:35:24.606529   70417 node_conditions.go:123] node cpu capacity is 2
	I0311 21:35:24.606546   70417 node_conditions.go:105] duration metric: took 11.711241ms to run NodePressure ...
	I0311 21:35:24.606565   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:24.893361   70417 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:35:24.899200   70417 kubeadm.go:733] kubelet initialised
	I0311 21:35:24.899225   70417 kubeadm.go:734] duration metric: took 5.837351ms waiting for restarted kubelet to initialise ...
	I0311 21:35:24.899235   70417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:35:24.905858   70417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:26.912640   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:28.916566   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:27.029381   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:29.529565   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:26.414267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:26.914876   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.414469   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.914513   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.414924   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.914126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.414526   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.914039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.414305   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.914438   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.147527   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:33.147694   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.413246   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.912878   70417 pod_ready.go:92] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:31.912899   70417 pod_ready.go:81] duration metric: took 7.007017714s for pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:31.912908   70417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:33.977091   70417 pod_ready.go:102] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:32.029295   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:34.529021   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.414610   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.914472   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.414158   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.914169   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.414745   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.914820   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.414071   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.914228   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.414135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.914695   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.148058   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:37.648200   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.422565   70417 pod_ready.go:102] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.921304   70417 pod_ready.go:92] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.921328   70417 pod_ready.go:81] duration metric: took 5.008411943s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.921340   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.927268   70417 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.927284   70417 pod_ready.go:81] duration metric: took 5.936969ms for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.927292   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.932540   70417 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.932563   70417 pod_ready.go:81] duration metric: took 5.264737ms for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.932575   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cggzr" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.937456   70417 pod_ready.go:92] pod "kube-proxy-cggzr" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.937473   70417 pod_ready.go:81] duration metric: took 4.892276ms for pod "kube-proxy-cggzr" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.937480   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.942372   70417 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.942390   70417 pod_ready.go:81] duration metric: took 4.902792ms for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.942401   70417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:38.949452   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.531316   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:39.030491   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.414435   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:36.914157   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.414539   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.914811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.414070   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.914303   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.413935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.914135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.414569   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.914106   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.147355   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:42.148353   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:40.950204   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:42.950335   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:41.528874   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:43.530140   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:41.414404   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:41.914323   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.414215   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.914566   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.414671   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.914658   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.414703   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.913966   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.414045   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.914260   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.648282   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:47.148247   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:45.449963   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:47.451576   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:46.029164   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:48.529137   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:46.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:46.914821   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.414210   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.914008   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.413884   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.914160   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.414877   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.914379   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.414293   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.913867   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.148585   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.648372   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:49.949667   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.950874   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:53.953067   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:50.529616   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:53.030586   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.414582   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:51.914453   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.414668   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.914816   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.414768   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.914592   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.414743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.914307   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:55.414000   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:55.914553   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:55.914636   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:55.957434   70908 cri.go:89] found id: ""
	I0311 21:35:55.957459   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.957470   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:55.957477   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:55.957545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:55.995255   70908 cri.go:89] found id: ""
	I0311 21:35:55.995279   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.995290   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:55.995305   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:55.995364   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:56.038893   70908 cri.go:89] found id: ""
	I0311 21:35:56.038916   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.038926   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:56.038933   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:56.038990   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:54.147165   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.148641   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.647841   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.451057   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.950421   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:55.528922   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.029209   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:00.029912   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.081497   70908 cri.go:89] found id: ""
	I0311 21:35:56.081517   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.081528   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:56.081534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:56.081591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:56.120047   70908 cri.go:89] found id: ""
	I0311 21:35:56.120071   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.120079   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:56.120084   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:56.120156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:56.157350   70908 cri.go:89] found id: ""
	I0311 21:35:56.157370   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.157377   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:56.157382   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:56.157433   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:56.198324   70908 cri.go:89] found id: ""
	I0311 21:35:56.198354   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.198374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:56.198381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:56.198437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:56.236579   70908 cri.go:89] found id: ""
	I0311 21:35:56.236608   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.236619   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:35:56.236691   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:56.236712   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:56.377789   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:35:56.377809   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:56.377825   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:56.449765   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:56.449807   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:56.502417   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:56.502448   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:56.557205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:56.557241   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:35:59.073411   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:59.088205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:59.088287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:59.126458   70908 cri.go:89] found id: ""
	I0311 21:35:59.126486   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.126494   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:59.126499   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:59.126555   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:59.197887   70908 cri.go:89] found id: ""
	I0311 21:35:59.197911   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.197919   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:59.197924   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:59.197967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:59.239523   70908 cri.go:89] found id: ""
	I0311 21:35:59.239552   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.239562   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:59.239570   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:59.239642   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:59.280903   70908 cri.go:89] found id: ""
	I0311 21:35:59.280930   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.280940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:59.280947   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:59.281024   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:59.320218   70908 cri.go:89] found id: ""
	I0311 21:35:59.320242   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.320254   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:59.320260   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:59.320314   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:59.361235   70908 cri.go:89] found id: ""
	I0311 21:35:59.361265   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.361276   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:59.361283   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:59.361352   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:59.409477   70908 cri.go:89] found id: ""
	I0311 21:35:59.409503   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.409514   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:59.409522   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:59.409568   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:59.454704   70908 cri.go:89] found id: ""
	I0311 21:35:59.454728   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.454739   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:35:59.454748   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:59.454767   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:59.525839   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:59.525864   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:59.569577   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:59.569606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:59.628402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:59.628437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:35:59.647181   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:59.647208   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:59.731300   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:00.650515   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:03.146560   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:01.449702   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:03.950341   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:02.030569   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:04.529453   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:02.232458   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:02.246948   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:02.247025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:02.290561   70908 cri.go:89] found id: ""
	I0311 21:36:02.290588   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.290599   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:02.290605   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:02.290659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:02.333788   70908 cri.go:89] found id: ""
	I0311 21:36:02.333814   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.333821   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:02.333826   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:02.333877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:02.375774   70908 cri.go:89] found id: ""
	I0311 21:36:02.375798   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.375806   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:02.375812   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:02.375862   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:02.414741   70908 cri.go:89] found id: ""
	I0311 21:36:02.414781   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.414803   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:02.414810   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:02.414875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:02.456637   70908 cri.go:89] found id: ""
	I0311 21:36:02.456660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.456670   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:02.456677   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:02.456759   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:02.494633   70908 cri.go:89] found id: ""
	I0311 21:36:02.494660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.494670   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:02.494678   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:02.494738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:02.536187   70908 cri.go:89] found id: ""
	I0311 21:36:02.536212   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.536223   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:02.536230   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:02.536291   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:02.574933   70908 cri.go:89] found id: ""
	I0311 21:36:02.574962   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.574973   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:02.574985   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:02.575001   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:02.656610   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:02.656637   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:02.656653   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:02.730514   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:02.730548   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:02.776009   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:02.776041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:02.829792   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:02.829826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.345568   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:05.360082   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:05.360164   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:05.406106   70908 cri.go:89] found id: ""
	I0311 21:36:05.406131   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.406141   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:05.406147   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:05.406203   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:05.449584   70908 cri.go:89] found id: ""
	I0311 21:36:05.449608   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.449617   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:05.449624   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:05.449680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:05.493869   70908 cri.go:89] found id: ""
	I0311 21:36:05.493898   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.493912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:05.493928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:05.493994   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:05.563506   70908 cri.go:89] found id: ""
	I0311 21:36:05.563532   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.563542   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:05.563549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:05.563600   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:05.630140   70908 cri.go:89] found id: ""
	I0311 21:36:05.630165   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.630172   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:05.630177   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:05.630230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:05.675584   70908 cri.go:89] found id: ""
	I0311 21:36:05.675612   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.675623   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:05.675631   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:05.675689   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:05.720521   70908 cri.go:89] found id: ""
	I0311 21:36:05.720548   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.720557   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:05.720563   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:05.720615   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:05.759323   70908 cri.go:89] found id: ""
	I0311 21:36:05.759351   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.759359   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:05.759367   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:05.759379   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:05.801024   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:05.801050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:05.856330   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:05.856356   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.871299   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:05.871324   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:05.950218   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:05.950245   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:05.950259   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:05.148227   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:07.647389   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:05.950833   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:08.449548   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:07.028964   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:09.029396   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:08.535502   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:08.552152   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:08.552220   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:08.596602   70908 cri.go:89] found id: ""
	I0311 21:36:08.596707   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.596731   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:08.596755   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:08.596820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:08.641091   70908 cri.go:89] found id: ""
	I0311 21:36:08.641119   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.641130   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:08.641137   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:08.641198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:08.684466   70908 cri.go:89] found id: ""
	I0311 21:36:08.684494   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.684503   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:08.684510   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:08.684570   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:08.730899   70908 cri.go:89] found id: ""
	I0311 21:36:08.730924   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.730931   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:08.730937   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:08.730997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:08.775293   70908 cri.go:89] found id: ""
	I0311 21:36:08.775317   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.775324   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:08.775330   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:08.775387   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:08.816098   70908 cri.go:89] found id: ""
	I0311 21:36:08.816126   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.816137   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:08.816144   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:08.816207   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:08.857413   70908 cri.go:89] found id: ""
	I0311 21:36:08.857449   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.857460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:08.857476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:08.857541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:08.898252   70908 cri.go:89] found id: ""
	I0311 21:36:08.898283   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.898293   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:08.898302   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:08.898313   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:08.955162   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:08.955188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:08.970234   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:08.970258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:09.055025   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:09.055043   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:09.055055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:09.140345   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:09.140376   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:10.148323   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:12.647037   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:10.450796   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:12.450839   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:11.529842   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:14.029706   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:11.681542   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:11.697407   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:11.697481   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:11.740239   70908 cri.go:89] found id: ""
	I0311 21:36:11.740264   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.740274   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:11.740280   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:11.740336   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:11.777625   70908 cri.go:89] found id: ""
	I0311 21:36:11.777655   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.777667   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:11.777674   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:11.777745   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:11.817202   70908 cri.go:89] found id: ""
	I0311 21:36:11.817226   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.817233   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:11.817239   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:11.817306   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:11.858912   70908 cri.go:89] found id: ""
	I0311 21:36:11.858933   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.858940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:11.858945   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:11.858998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:11.897841   70908 cri.go:89] found id: ""
	I0311 21:36:11.897876   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.897887   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:11.897895   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:11.897955   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:11.936181   70908 cri.go:89] found id: ""
	I0311 21:36:11.936207   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.936218   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:11.936226   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:11.936293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:11.981882   70908 cri.go:89] found id: ""
	I0311 21:36:11.981905   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.981915   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:11.981922   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:11.981982   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:12.022270   70908 cri.go:89] found id: ""
	I0311 21:36:12.022298   70908 logs.go:276] 0 containers: []
	W0311 21:36:12.022309   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:12.022320   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:12.022333   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:12.074640   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:12.074668   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:12.089854   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:12.089879   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:12.179578   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:12.179595   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:12.179606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:12.263249   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:12.263285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:14.811547   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:14.827075   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:14.827175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:14.870512   70908 cri.go:89] found id: ""
	I0311 21:36:14.870544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.870555   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:14.870563   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:14.870625   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:14.908521   70908 cri.go:89] found id: ""
	I0311 21:36:14.908544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.908553   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:14.908558   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:14.908607   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:14.951702   70908 cri.go:89] found id: ""
	I0311 21:36:14.951729   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.951739   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:14.951746   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:14.951805   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:14.992590   70908 cri.go:89] found id: ""
	I0311 21:36:14.992618   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.992630   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:14.992638   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:14.992698   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:15.034535   70908 cri.go:89] found id: ""
	I0311 21:36:15.034556   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.034563   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:15.034569   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:15.034614   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:15.077175   70908 cri.go:89] found id: ""
	I0311 21:36:15.077200   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.077210   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:15.077218   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:15.077283   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:15.121500   70908 cri.go:89] found id: ""
	I0311 21:36:15.121530   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.121541   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:15.121549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:15.121655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:15.162712   70908 cri.go:89] found id: ""
	I0311 21:36:15.162738   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.162748   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:15.162757   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:15.162776   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:15.241469   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:15.241488   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:15.241499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:15.322257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:15.322291   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:15.368258   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:15.368285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:15.427131   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:15.427163   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:14.648776   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:17.148710   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:14.452948   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:16.949085   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:18.950111   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:16.030409   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:18.529122   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:17.944348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:17.958629   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:17.958704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:17.995869   70908 cri.go:89] found id: ""
	I0311 21:36:17.995895   70908 logs.go:276] 0 containers: []
	W0311 21:36:17.995904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:17.995914   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:17.995976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:18.032273   70908 cri.go:89] found id: ""
	I0311 21:36:18.032300   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.032308   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:18.032313   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:18.032361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:18.072497   70908 cri.go:89] found id: ""
	I0311 21:36:18.072519   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.072526   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:18.072532   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:18.072578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:18.110091   70908 cri.go:89] found id: ""
	I0311 21:36:18.110119   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.110129   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:18.110136   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:18.110199   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:18.152217   70908 cri.go:89] found id: ""
	I0311 21:36:18.152261   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.152272   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:18.152280   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:18.152347   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:18.193957   70908 cri.go:89] found id: ""
	I0311 21:36:18.193989   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.194000   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:18.194008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:18.194086   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:18.231828   70908 cri.go:89] found id: ""
	I0311 21:36:18.231861   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.231873   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:18.231880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:18.231939   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:18.271862   70908 cri.go:89] found id: ""
	I0311 21:36:18.271896   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.271907   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:18.271917   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:18.271933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:18.325405   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:18.325440   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:18.344560   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:18.344593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:18.425051   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:18.425075   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:18.425093   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:18.513247   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:18.513287   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:19.646758   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.647702   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.649318   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.450692   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.950088   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.028812   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.029828   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.060499   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:21.076648   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:21.076716   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:21.117270   70908 cri.go:89] found id: ""
	I0311 21:36:21.117298   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.117309   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:21.117317   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:21.117388   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:21.159005   70908 cri.go:89] found id: ""
	I0311 21:36:21.159045   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.159056   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:21.159063   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:21.159122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:21.196576   70908 cri.go:89] found id: ""
	I0311 21:36:21.196599   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.196609   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:21.196617   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:21.196677   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:21.237689   70908 cri.go:89] found id: ""
	I0311 21:36:21.237718   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.237729   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:21.237734   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:21.237783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:21.280662   70908 cri.go:89] found id: ""
	I0311 21:36:21.280696   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.280707   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:21.280714   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:21.280795   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:21.321475   70908 cri.go:89] found id: ""
	I0311 21:36:21.321501   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.321511   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:21.321518   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:21.321581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:21.365186   70908 cri.go:89] found id: ""
	I0311 21:36:21.365209   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.365216   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:21.365221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:21.365276   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:21.408678   70908 cri.go:89] found id: ""
	I0311 21:36:21.408713   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.408725   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:21.408754   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:21.408771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:21.466635   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:21.466663   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:21.482596   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:21.482622   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:21.556750   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:21.556769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:21.556780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:21.643095   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:21.643126   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.195112   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:24.208829   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:24.208895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:24.245956   70908 cri.go:89] found id: ""
	I0311 21:36:24.245981   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.245989   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:24.245995   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:24.246053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:24.289740   70908 cri.go:89] found id: ""
	I0311 21:36:24.289766   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.289778   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:24.289784   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:24.289846   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:24.336911   70908 cri.go:89] found id: ""
	I0311 21:36:24.336963   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.336977   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:24.336986   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:24.337057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:24.381715   70908 cri.go:89] found id: ""
	I0311 21:36:24.381739   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.381753   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:24.381761   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:24.381817   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:24.423759   70908 cri.go:89] found id: ""
	I0311 21:36:24.423787   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.423797   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:24.423805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:24.423882   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:24.468903   70908 cri.go:89] found id: ""
	I0311 21:36:24.468931   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.468946   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:24.468954   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:24.469013   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:24.509602   70908 cri.go:89] found id: ""
	I0311 21:36:24.509629   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.509639   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:24.509646   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:24.509706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:24.551483   70908 cri.go:89] found id: ""
	I0311 21:36:24.551511   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.551522   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:24.551532   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:24.551545   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:24.567123   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:24.567154   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:24.644215   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:24.644247   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:24.644262   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:24.726438   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:24.726469   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.779567   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:24.779596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:26.146823   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:28.148291   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:26.450637   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:28.949850   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:25.528542   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:27.529375   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:29.529701   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:27.337785   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:27.352504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:27.352578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:27.395787   70908 cri.go:89] found id: ""
	I0311 21:36:27.395809   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.395817   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:27.395823   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:27.395869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:27.441800   70908 cri.go:89] found id: ""
	I0311 21:36:27.441826   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.441834   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:27.441839   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:27.441893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:27.481761   70908 cri.go:89] found id: ""
	I0311 21:36:27.481791   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.481802   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:27.481809   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:27.481868   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:27.526981   70908 cri.go:89] found id: ""
	I0311 21:36:27.527011   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.527029   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:27.527037   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:27.527130   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:27.566569   70908 cri.go:89] found id: ""
	I0311 21:36:27.566602   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.566614   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:27.566622   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:27.566682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:27.607434   70908 cri.go:89] found id: ""
	I0311 21:36:27.607456   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.607464   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:27.607469   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:27.607529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:27.652648   70908 cri.go:89] found id: ""
	I0311 21:36:27.652674   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.652681   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:27.652686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:27.652756   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:27.691105   70908 cri.go:89] found id: ""
	I0311 21:36:27.691136   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.691148   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:27.691158   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:27.691173   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:27.706451   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:27.706477   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:27.788935   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:27.788959   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:27.788975   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:27.875721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:27.875758   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:27.927920   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:27.927951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.487728   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:30.503425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:30.503508   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:30.550846   70908 cri.go:89] found id: ""
	I0311 21:36:30.550868   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.550875   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:30.550881   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:30.550928   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:30.586886   70908 cri.go:89] found id: ""
	I0311 21:36:30.586915   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.586925   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:30.586934   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:30.586991   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:30.627849   70908 cri.go:89] found id: ""
	I0311 21:36:30.627884   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.627895   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:30.627902   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:30.627965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:30.669188   70908 cri.go:89] found id: ""
	I0311 21:36:30.669209   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.669216   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:30.669222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:30.669266   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:30.711676   70908 cri.go:89] found id: ""
	I0311 21:36:30.711697   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.711705   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:30.711710   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:30.711758   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:30.754218   70908 cri.go:89] found id: ""
	I0311 21:36:30.754240   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.754248   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:30.754253   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:30.754299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:30.791224   70908 cri.go:89] found id: ""
	I0311 21:36:30.791255   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.791263   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:30.791269   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:30.791328   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:30.831263   70908 cri.go:89] found id: ""
	I0311 21:36:30.831291   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.831301   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:30.831311   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:30.831326   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:30.876574   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:30.876600   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.928483   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:30.928509   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:30.944642   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:30.944665   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:31.026406   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:31.026428   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:31.026444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:30.648859   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.147907   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:30.952483   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.451714   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:32.028484   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:34.028948   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.611104   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:33.625644   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:33.625706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:33.664787   70908 cri.go:89] found id: ""
	I0311 21:36:33.664816   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.664825   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:33.664830   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:33.664894   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:33.704636   70908 cri.go:89] found id: ""
	I0311 21:36:33.704659   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.704666   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:33.704672   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:33.704717   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:33.744797   70908 cri.go:89] found id: ""
	I0311 21:36:33.744837   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.744848   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:33.744855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:33.744917   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:33.787435   70908 cri.go:89] found id: ""
	I0311 21:36:33.787464   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.787474   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:33.787482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:33.787541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:33.826578   70908 cri.go:89] found id: ""
	I0311 21:36:33.826606   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.826617   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:33.826624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:33.826684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:33.864854   70908 cri.go:89] found id: ""
	I0311 21:36:33.864875   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.864882   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:33.864887   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:33.864934   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:33.905366   70908 cri.go:89] found id: ""
	I0311 21:36:33.905397   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.905409   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:33.905416   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:33.905477   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:33.950196   70908 cri.go:89] found id: ""
	I0311 21:36:33.950222   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.950232   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:33.950243   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:33.950258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:34.001016   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:34.001049   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:34.059102   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:34.059131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:34.075879   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:34.075908   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:34.177114   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:34.177138   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:34.177161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:35.647611   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.147941   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:35.950147   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.449090   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:36.030072   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.527952   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:36.756459   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:36.772781   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:36.772867   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:36.820076   70908 cri.go:89] found id: ""
	I0311 21:36:36.820103   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.820111   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:36.820118   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:36.820169   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:36.859279   70908 cri.go:89] found id: ""
	I0311 21:36:36.859306   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.859317   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:36.859324   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:36.859383   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:36.899669   70908 cri.go:89] found id: ""
	I0311 21:36:36.899694   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.899705   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:36.899712   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:36.899770   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:36.938826   70908 cri.go:89] found id: ""
	I0311 21:36:36.938853   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.938864   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:36.938872   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:36.938957   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:36.976659   70908 cri.go:89] found id: ""
	I0311 21:36:36.976685   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.976693   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:36.976703   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:36.976772   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:37.015439   70908 cri.go:89] found id: ""
	I0311 21:36:37.015462   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.015469   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:37.015474   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:37.015519   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:37.057469   70908 cri.go:89] found id: ""
	I0311 21:36:37.057496   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.057507   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:37.057514   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:37.057579   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:37.106287   70908 cri.go:89] found id: ""
	I0311 21:36:37.106316   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.106325   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:37.106335   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:37.106352   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:37.122333   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:37.122367   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:37.197708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:37.197731   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:37.197742   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:37.281911   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:37.281944   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:37.335978   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:37.336011   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:39.891583   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:39.914741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:39.914823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:39.955751   70908 cri.go:89] found id: ""
	I0311 21:36:39.955773   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.955781   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:39.955786   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:39.955837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:39.997604   70908 cri.go:89] found id: ""
	I0311 21:36:39.997632   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.997642   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:39.997649   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:39.997711   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:40.039138   70908 cri.go:89] found id: ""
	I0311 21:36:40.039168   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.039178   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:40.039186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:40.039230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:40.079906   70908 cri.go:89] found id: ""
	I0311 21:36:40.079934   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.079945   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:40.079952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:40.080017   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:40.124116   70908 cri.go:89] found id: ""
	I0311 21:36:40.124141   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.124152   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:40.124159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:40.124221   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:40.165078   70908 cri.go:89] found id: ""
	I0311 21:36:40.165099   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.165108   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:40.165113   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:40.165158   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:40.203928   70908 cri.go:89] found id: ""
	I0311 21:36:40.203954   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.203962   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:40.203971   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:40.204018   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:40.244755   70908 cri.go:89] found id: ""
	I0311 21:36:40.244783   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.244793   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:40.244803   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:40.244819   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:40.302090   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:40.302125   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:40.318071   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:40.318097   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:40.405336   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:40.405363   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:40.405378   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:40.493262   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:40.493298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:40.148095   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.651483   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:40.449200   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.450259   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:40.528526   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.533619   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:45.029285   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:43.052419   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:43.068300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:43.068378   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:43.109665   70908 cri.go:89] found id: ""
	I0311 21:36:43.109701   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.109717   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:43.109725   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:43.109789   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:43.152233   70908 cri.go:89] found id: ""
	I0311 21:36:43.152253   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.152260   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:43.152265   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:43.152311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:43.194969   70908 cri.go:89] found id: ""
	I0311 21:36:43.194995   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.195002   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:43.195008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:43.195056   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:43.234555   70908 cri.go:89] found id: ""
	I0311 21:36:43.234581   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.234592   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:43.234597   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:43.234651   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:43.275188   70908 cri.go:89] found id: ""
	I0311 21:36:43.275214   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.275224   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:43.275232   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:43.275287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:43.314481   70908 cri.go:89] found id: ""
	I0311 21:36:43.314507   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.314515   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:43.314521   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:43.314580   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:43.353287   70908 cri.go:89] found id: ""
	I0311 21:36:43.353317   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.353328   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:43.353336   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:43.353395   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:43.396112   70908 cri.go:89] found id: ""
	I0311 21:36:43.396138   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.396150   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:43.396160   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:43.396175   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:43.456116   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:43.456143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:43.472992   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:43.473023   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:43.558281   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:43.558311   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:43.558327   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:43.641849   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:43.641885   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:45.147404   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.147574   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:44.954864   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.450806   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.029669   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:49.529505   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:46.187444   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:46.202848   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:46.202911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:46.244843   70908 cri.go:89] found id: ""
	I0311 21:36:46.244872   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.244880   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:46.244886   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:46.244933   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:46.297789   70908 cri.go:89] found id: ""
	I0311 21:36:46.297820   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.297831   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:46.297838   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:46.297903   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:46.353104   70908 cri.go:89] found id: ""
	I0311 21:36:46.353127   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.353134   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:46.353140   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:46.353211   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:46.426767   70908 cri.go:89] found id: ""
	I0311 21:36:46.426792   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.426799   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:46.426804   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:46.426858   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:46.469850   70908 cri.go:89] found id: ""
	I0311 21:36:46.469881   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.469891   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:46.469899   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:46.469960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:46.510692   70908 cri.go:89] found id: ""
	I0311 21:36:46.510718   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.510726   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:46.510732   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:46.510787   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:46.554445   70908 cri.go:89] found id: ""
	I0311 21:36:46.554468   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.554475   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:46.554482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:46.554527   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:46.592417   70908 cri.go:89] found id: ""
	I0311 21:36:46.592448   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.592458   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:46.592467   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:46.592480   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:46.607106   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:46.607146   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:46.691556   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:46.691575   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:46.691587   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:46.772468   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:46.772503   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:46.814478   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:46.814512   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.368451   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:49.383504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:49.383573   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:49.427392   70908 cri.go:89] found id: ""
	I0311 21:36:49.427415   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.427426   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:49.427434   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:49.427493   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:49.469022   70908 cri.go:89] found id: ""
	I0311 21:36:49.469044   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.469052   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:49.469059   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:49.469106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:49.510755   70908 cri.go:89] found id: ""
	I0311 21:36:49.510781   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.510792   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:49.510800   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:49.510886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:49.556594   70908 cri.go:89] found id: ""
	I0311 21:36:49.556631   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.556642   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:49.556649   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:49.556710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:49.597035   70908 cri.go:89] found id: ""
	I0311 21:36:49.597059   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.597067   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:49.597072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:49.597138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:49.642947   70908 cri.go:89] found id: ""
	I0311 21:36:49.642975   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.642985   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:49.642993   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:49.643051   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:49.681401   70908 cri.go:89] found id: ""
	I0311 21:36:49.681423   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.681430   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:49.681435   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:49.681478   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:49.718498   70908 cri.go:89] found id: ""
	I0311 21:36:49.718529   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.718539   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:49.718549   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:49.718563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:49.764483   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:49.764515   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.821261   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:49.821293   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:49.837110   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:49.837135   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:49.918507   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:49.918529   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:49.918541   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:49.648198   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.146837   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:49.450941   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:51.950760   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.030288   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:54.528831   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.500354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:52.516722   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:52.516811   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:52.563312   70908 cri.go:89] found id: ""
	I0311 21:36:52.563340   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.563354   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:52.563362   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:52.563421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:52.603545   70908 cri.go:89] found id: ""
	I0311 21:36:52.603572   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.603581   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:52.603588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:52.603657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:52.645624   70908 cri.go:89] found id: ""
	I0311 21:36:52.645648   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.645658   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:52.645665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:52.645722   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:52.693335   70908 cri.go:89] found id: ""
	I0311 21:36:52.693363   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.693373   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:52.693380   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:52.693437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:52.740272   70908 cri.go:89] found id: ""
	I0311 21:36:52.740310   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.740331   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:52.740341   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:52.740398   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:52.786241   70908 cri.go:89] found id: ""
	I0311 21:36:52.786276   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.786285   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:52.786291   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:52.786355   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:52.825013   70908 cri.go:89] found id: ""
	I0311 21:36:52.825042   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.825053   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:52.825061   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:52.825117   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:52.862867   70908 cri.go:89] found id: ""
	I0311 21:36:52.862892   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.862901   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:52.862908   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:52.862922   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:52.917005   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:52.917036   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:52.932086   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:52.932112   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:53.012379   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:53.012402   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:53.012413   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:53.096881   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:53.096913   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:55.640142   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:55.656664   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:55.656749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:55.697962   70908 cri.go:89] found id: ""
	I0311 21:36:55.697992   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.698000   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:55.698005   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:55.698059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:55.741888   70908 cri.go:89] found id: ""
	I0311 21:36:55.741910   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.741917   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:55.741921   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:55.741965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:55.779352   70908 cri.go:89] found id: ""
	I0311 21:36:55.779372   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.779381   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:55.779386   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:55.779430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:55.819496   70908 cri.go:89] found id: ""
	I0311 21:36:55.819530   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.819541   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:55.819549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:55.819612   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:55.859384   70908 cri.go:89] found id: ""
	I0311 21:36:55.859412   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.859419   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:55.859424   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:55.859473   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:55.899415   70908 cri.go:89] found id: ""
	I0311 21:36:55.899438   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.899445   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:55.899450   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:55.899496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:55.938595   70908 cri.go:89] found id: ""
	I0311 21:36:55.938625   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.938637   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:55.938645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:55.938710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:55.980064   70908 cri.go:89] found id: ""
	I0311 21:36:55.980089   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.980096   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:55.980103   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:55.980115   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:55.996222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:55.996297   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:36:54.147743   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.150270   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:58.648829   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:54.450767   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.949091   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:58.950443   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.529184   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:59.029323   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	W0311 21:36:56.081046   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:56.081074   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:56.081090   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:56.167748   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:56.167773   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:56.221118   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:56.221150   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:58.772403   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:58.789349   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:58.789421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:58.829945   70908 cri.go:89] found id: ""
	I0311 21:36:58.829974   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.829985   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:58.829993   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:58.830059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:58.877190   70908 cri.go:89] found id: ""
	I0311 21:36:58.877214   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.877224   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:58.877231   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:58.877295   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:58.920086   70908 cri.go:89] found id: ""
	I0311 21:36:58.920113   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.920122   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:58.920128   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:58.920189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:58.956864   70908 cri.go:89] found id: ""
	I0311 21:36:58.956890   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.956900   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:58.956907   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:58.956967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:58.999363   70908 cri.go:89] found id: ""
	I0311 21:36:58.999390   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.999400   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:58.999408   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:58.999469   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:59.041759   70908 cri.go:89] found id: ""
	I0311 21:36:59.041787   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.041797   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:59.041803   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:59.041850   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:59.084378   70908 cri.go:89] found id: ""
	I0311 21:36:59.084406   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.084417   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:59.084425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:59.084479   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:59.124105   70908 cri.go:89] found id: ""
	I0311 21:36:59.124151   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.124163   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:59.124173   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:59.124188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:59.202060   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:59.202083   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:59.202098   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:59.284025   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:59.284060   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:59.327926   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:59.327951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:59.382505   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:59.382533   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:01.147260   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.149020   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.450230   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.949834   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.529173   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.532427   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.900084   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:01.914495   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:01.914552   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:01.956887   70908 cri.go:89] found id: ""
	I0311 21:37:01.956912   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.956922   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:01.956929   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:01.956986   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:01.995358   70908 cri.go:89] found id: ""
	I0311 21:37:01.995385   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.995394   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:01.995399   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:01.995448   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:02.033949   70908 cri.go:89] found id: ""
	I0311 21:37:02.033974   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.033984   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:02.033991   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:02.034049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:02.074348   70908 cri.go:89] found id: ""
	I0311 21:37:02.074372   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.074382   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:02.074390   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:02.074449   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:02.112456   70908 cri.go:89] found id: ""
	I0311 21:37:02.112479   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.112486   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:02.112491   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:02.112554   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:02.155102   70908 cri.go:89] found id: ""
	I0311 21:37:02.155130   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.155138   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:02.155149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:02.155205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:02.191359   70908 cri.go:89] found id: ""
	I0311 21:37:02.191386   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.191393   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:02.191399   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:02.191450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:02.236178   70908 cri.go:89] found id: ""
	I0311 21:37:02.236203   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.236211   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:02.236220   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:02.236231   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:02.285794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:02.285818   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:02.342348   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:02.342387   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:02.357230   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:02.357257   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:02.431044   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:02.431064   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:02.431076   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.019473   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:05.035841   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:05.035901   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:05.082013   70908 cri.go:89] found id: ""
	I0311 21:37:05.082034   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.082041   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:05.082046   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:05.082091   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:05.126236   70908 cri.go:89] found id: ""
	I0311 21:37:05.126257   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.126265   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:05.126270   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:05.126311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:05.170573   70908 cri.go:89] found id: ""
	I0311 21:37:05.170601   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.170608   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:05.170614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:05.170658   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:05.213921   70908 cri.go:89] found id: ""
	I0311 21:37:05.213948   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.213958   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:05.213965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:05.214025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:05.261178   70908 cri.go:89] found id: ""
	I0311 21:37:05.261206   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.261213   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:05.261221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:05.261273   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:05.306007   70908 cri.go:89] found id: ""
	I0311 21:37:05.306037   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.306045   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:05.306051   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:05.306106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:05.346653   70908 cri.go:89] found id: ""
	I0311 21:37:05.346679   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.346688   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:05.346694   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:05.346752   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:05.384587   70908 cri.go:89] found id: ""
	I0311 21:37:05.384626   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.384637   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:05.384648   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:05.384664   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:05.440676   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:05.440709   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:05.456989   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:05.457018   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:05.553900   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:05.553932   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:05.553947   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.633270   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:05.633300   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:05.647077   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.146975   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:06.449502   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.450008   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:06.028642   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.529826   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.181935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:08.198179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:08.198251   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:08.236484   70908 cri.go:89] found id: ""
	I0311 21:37:08.236506   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.236516   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:08.236524   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:08.236578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:08.277701   70908 cri.go:89] found id: ""
	I0311 21:37:08.277731   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.277739   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:08.277745   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:08.277804   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:08.319559   70908 cri.go:89] found id: ""
	I0311 21:37:08.319585   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.319596   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:08.319604   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:08.319666   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:08.359752   70908 cri.go:89] found id: ""
	I0311 21:37:08.359777   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.359785   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:08.359791   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:08.359849   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:08.397432   70908 cri.go:89] found id: ""
	I0311 21:37:08.397453   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.397460   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:08.397465   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:08.397511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:08.438708   70908 cri.go:89] found id: ""
	I0311 21:37:08.438732   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.438742   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:08.438749   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:08.438807   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:08.479511   70908 cri.go:89] found id: ""
	I0311 21:37:08.479533   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.479560   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:08.479566   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:08.479620   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:08.521634   70908 cri.go:89] found id: ""
	I0311 21:37:08.521659   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.521670   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:08.521680   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:08.521693   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:08.577033   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:08.577065   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:08.592006   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:08.592030   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:08.680862   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:08.680903   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:08.680919   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:08.764991   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:08.765037   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:10.147819   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:12.648352   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:10.949371   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:12.949571   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:11.028245   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:13.028689   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:15.034232   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:11.313168   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:11.326808   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:11.326876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:11.364223   70908 cri.go:89] found id: ""
	I0311 21:37:11.364246   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.364254   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:11.364259   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:11.364311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:11.401361   70908 cri.go:89] found id: ""
	I0311 21:37:11.401391   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.401402   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:11.401409   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:11.401459   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:11.441927   70908 cri.go:89] found id: ""
	I0311 21:37:11.441950   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.441957   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:11.441962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:11.442015   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:11.480804   70908 cri.go:89] found id: ""
	I0311 21:37:11.480836   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.480847   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:11.480855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:11.480913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:11.520135   70908 cri.go:89] found id: ""
	I0311 21:37:11.520166   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.520177   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:11.520193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:11.520255   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:11.559214   70908 cri.go:89] found id: ""
	I0311 21:37:11.559244   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.559255   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:11.559263   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:11.559322   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:11.597346   70908 cri.go:89] found id: ""
	I0311 21:37:11.597374   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.597383   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:11.597391   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:11.597452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:11.646095   70908 cri.go:89] found id: ""
	I0311 21:37:11.646118   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.646127   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:11.646137   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:11.646167   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:11.691813   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:11.691844   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:11.745270   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:11.745303   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:11.761107   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:11.761131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:11.841033   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:11.841059   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:11.841074   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:14.431709   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:14.447064   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:14.447131   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:14.493094   70908 cri.go:89] found id: ""
	I0311 21:37:14.493132   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.493140   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:14.493146   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:14.493195   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:14.537391   70908 cri.go:89] found id: ""
	I0311 21:37:14.537415   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.537423   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:14.537428   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:14.537487   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:14.576284   70908 cri.go:89] found id: ""
	I0311 21:37:14.576306   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.576313   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:14.576319   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:14.576375   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:14.627057   70908 cri.go:89] found id: ""
	I0311 21:37:14.627086   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.627097   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:14.627105   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:14.627163   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:14.669204   70908 cri.go:89] found id: ""
	I0311 21:37:14.669226   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.669233   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:14.669238   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:14.669293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:14.708787   70908 cri.go:89] found id: ""
	I0311 21:37:14.708812   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.708820   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:14.708826   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:14.708892   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:14.749795   70908 cri.go:89] found id: ""
	I0311 21:37:14.749819   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.749828   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:14.749835   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:14.749893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:14.794871   70908 cri.go:89] found id: ""
	I0311 21:37:14.794900   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.794911   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:14.794922   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:14.794936   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:14.850022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:14.850050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:14.866589   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:14.866618   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:14.968887   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:14.968906   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:14.968921   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:15.047376   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:15.047404   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:14.648528   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:16.649275   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:18.649842   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:14.951387   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.451239   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.529411   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:20.030012   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.599834   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:17.613610   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:17.613665   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:17.655340   70908 cri.go:89] found id: ""
	I0311 21:37:17.655361   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.655369   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:17.655374   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:17.655416   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:17.695071   70908 cri.go:89] found id: ""
	I0311 21:37:17.695103   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.695114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:17.695121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:17.695178   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:17.731914   70908 cri.go:89] found id: ""
	I0311 21:37:17.731938   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.731946   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:17.731952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:17.732012   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:17.768198   70908 cri.go:89] found id: ""
	I0311 21:37:17.768224   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.768236   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:17.768242   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:17.768301   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:17.802881   70908 cri.go:89] found id: ""
	I0311 21:37:17.802909   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.802920   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:17.802928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:17.802983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:17.841660   70908 cri.go:89] found id: ""
	I0311 21:37:17.841684   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.841692   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:17.841698   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:17.841749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:17.880154   70908 cri.go:89] found id: ""
	I0311 21:37:17.880183   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.880196   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:17.880205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:17.880260   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:17.919797   70908 cri.go:89] found id: ""
	I0311 21:37:17.919822   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.919829   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:17.919837   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:17.919847   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:17.976607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:17.976636   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:17.993313   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:17.993339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:18.069928   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:18.069956   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:18.069973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:18.152257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:18.152285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:20.706553   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:20.721148   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:20.721214   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:20.762913   70908 cri.go:89] found id: ""
	I0311 21:37:20.762935   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.762943   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:20.762952   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:20.762997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:20.811120   70908 cri.go:89] found id: ""
	I0311 21:37:20.811147   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.811158   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:20.811165   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:20.811225   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:20.848987   70908 cri.go:89] found id: ""
	I0311 21:37:20.849015   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.849026   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:20.849033   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:20.849098   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:20.896201   70908 cri.go:89] found id: ""
	I0311 21:37:20.896226   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.896233   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:20.896240   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:20.896299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:20.936570   70908 cri.go:89] found id: ""
	I0311 21:37:20.936595   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.936603   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:20.936608   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:20.936657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:20.977535   70908 cri.go:89] found id: ""
	I0311 21:37:20.977565   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.977576   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:20.977584   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:20.977647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:21.015370   70908 cri.go:89] found id: ""
	I0311 21:37:21.015395   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.015405   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:21.015413   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:21.015472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:21.146868   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:23.147272   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:19.950972   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:22.450298   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:22.528109   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:24.530216   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:21.056190   70908 cri.go:89] found id: ""
	I0311 21:37:21.056214   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.056225   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:21.056235   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:21.056255   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:21.112022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:21.112051   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:21.128841   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:21.128872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:21.209690   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:21.209716   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:21.209732   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:21.291064   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:21.291099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:23.844334   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:23.860000   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:23.860061   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:23.899777   70908 cri.go:89] found id: ""
	I0311 21:37:23.899805   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.899814   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:23.899820   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:23.899879   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:23.941510   70908 cri.go:89] found id: ""
	I0311 21:37:23.941537   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.941547   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:23.941555   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:23.941627   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:23.980564   70908 cri.go:89] found id: ""
	I0311 21:37:23.980592   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.980602   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:23.980614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:23.980676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:24.020310   70908 cri.go:89] found id: ""
	I0311 21:37:24.020337   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.020348   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:24.020354   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:24.020410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:24.059320   70908 cri.go:89] found id: ""
	I0311 21:37:24.059349   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.059359   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:24.059367   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:24.059424   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:24.096625   70908 cri.go:89] found id: ""
	I0311 21:37:24.096652   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.096660   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:24.096666   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:24.096723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:24.137068   70908 cri.go:89] found id: ""
	I0311 21:37:24.137100   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.137112   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:24.137121   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:24.137182   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:24.181298   70908 cri.go:89] found id: ""
	I0311 21:37:24.181325   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.181336   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:24.181348   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:24.181364   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:24.265423   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:24.265454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:24.318088   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:24.318113   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:24.374402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:24.374430   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:24.388934   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:24.388962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:24.475842   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:25.647164   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:27.650157   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:24.948984   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:26.949444   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:28.950697   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:27.030240   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:29.030848   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:26.976017   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:26.991533   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:26.991602   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:27.034750   70908 cri.go:89] found id: ""
	I0311 21:37:27.034769   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.034776   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:27.034781   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:27.034837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:27.073275   70908 cri.go:89] found id: ""
	I0311 21:37:27.073301   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.073309   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:27.073317   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:27.073363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:27.113396   70908 cri.go:89] found id: ""
	I0311 21:37:27.113418   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.113425   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:27.113431   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:27.113482   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:27.157442   70908 cri.go:89] found id: ""
	I0311 21:37:27.157465   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.157475   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:27.157482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:27.157534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:27.197277   70908 cri.go:89] found id: ""
	I0311 21:37:27.197302   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.197309   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:27.197315   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:27.197363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:27.237967   70908 cri.go:89] found id: ""
	I0311 21:37:27.237991   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.237999   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:27.238005   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:27.238077   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:27.280434   70908 cri.go:89] found id: ""
	I0311 21:37:27.280459   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.280467   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:27.280472   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:27.280535   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:27.334940   70908 cri.go:89] found id: ""
	I0311 21:37:27.334970   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.334982   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:27.334992   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:27.335010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:27.402535   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:27.402570   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:27.416758   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:27.416787   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:27.492762   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:27.492786   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:27.492803   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:27.576989   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:27.577032   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:30.124039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:30.138419   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:30.138483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:30.180900   70908 cri.go:89] found id: ""
	I0311 21:37:30.180926   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.180936   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:30.180944   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:30.180998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:30.222886   70908 cri.go:89] found id: ""
	I0311 21:37:30.222913   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.222921   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:30.222926   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:30.222976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:30.264332   70908 cri.go:89] found id: ""
	I0311 21:37:30.264357   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.264367   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:30.264376   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:30.264436   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:30.307084   70908 cri.go:89] found id: ""
	I0311 21:37:30.307112   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.307123   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:30.307130   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:30.307188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:30.345954   70908 cri.go:89] found id: ""
	I0311 21:37:30.345979   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.345990   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:30.345997   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:30.346057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:30.389408   70908 cri.go:89] found id: ""
	I0311 21:37:30.389439   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.389450   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:30.389457   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:30.389517   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:30.438380   70908 cri.go:89] found id: ""
	I0311 21:37:30.438410   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.438420   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:30.438427   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:30.438489   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:30.479860   70908 cri.go:89] found id: ""
	I0311 21:37:30.479884   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.479895   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:30.479906   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:30.479920   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:30.535831   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:30.535857   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:30.552702   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:30.552725   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:30.633417   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:30.633439   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:30.633454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:30.723106   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:30.723143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:30.147993   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:32.152839   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:31.450942   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.949947   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:31.528469   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.529721   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.270654   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:33.296640   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:33.296710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:33.366053   70908 cri.go:89] found id: ""
	I0311 21:37:33.366082   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.366093   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:33.366101   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:33.366161   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:33.421455   70908 cri.go:89] found id: ""
	I0311 21:37:33.421488   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.421501   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:33.421509   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:33.421583   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:33.464555   70908 cri.go:89] found id: ""
	I0311 21:37:33.464579   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.464586   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:33.464592   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:33.464647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:33.507044   70908 cri.go:89] found id: ""
	I0311 21:37:33.507086   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.507100   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:33.507110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:33.507175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:33.561446   70908 cri.go:89] found id: ""
	I0311 21:37:33.561518   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.561532   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:33.561540   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:33.561601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:33.604496   70908 cri.go:89] found id: ""
	I0311 21:37:33.604519   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.604528   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:33.604534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:33.604591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:33.645754   70908 cri.go:89] found id: ""
	I0311 21:37:33.645781   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.645791   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:33.645797   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:33.645869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:33.690041   70908 cri.go:89] found id: ""
	I0311 21:37:33.690071   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.690082   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:33.690092   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:33.690108   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:33.765708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:33.765737   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:33.765752   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:33.848869   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:33.848906   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:33.900191   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:33.900223   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:33.957101   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:33.957138   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:34.646831   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.647640   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.449429   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:38.948831   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.028141   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:38.028588   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:40.028676   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.474442   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:36.490159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:36.490231   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:36.537784   70908 cri.go:89] found id: ""
	I0311 21:37:36.537812   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.537822   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:36.537829   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:36.537885   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:36.581192   70908 cri.go:89] found id: ""
	I0311 21:37:36.581219   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.581230   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:36.581237   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:36.581297   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:36.620448   70908 cri.go:89] found id: ""
	I0311 21:37:36.620480   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.620492   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:36.620501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:36.620566   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:36.662135   70908 cri.go:89] found id: ""
	I0311 21:37:36.662182   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.662193   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:36.662203   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:36.662268   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:36.708138   70908 cri.go:89] found id: ""
	I0311 21:37:36.708178   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.708188   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:36.708198   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:36.708267   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:36.749668   70908 cri.go:89] found id: ""
	I0311 21:37:36.749697   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.749708   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:36.749717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:36.749783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:36.788455   70908 cri.go:89] found id: ""
	I0311 21:37:36.788476   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.788483   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:36.788488   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:36.788534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:36.830216   70908 cri.go:89] found id: ""
	I0311 21:37:36.830244   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.830257   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:36.830267   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:36.830285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:36.915306   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:36.915336   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:36.958861   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:36.958892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:37.014463   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:37.014489   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:37.029979   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:37.030010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:37.106840   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:39.607929   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:39.626247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:39.626307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:39.667409   70908 cri.go:89] found id: ""
	I0311 21:37:39.667436   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.667446   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:39.667454   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:39.667509   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:39.714167   70908 cri.go:89] found id: ""
	I0311 21:37:39.714198   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.714210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:39.714217   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:39.714275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:39.754759   70908 cri.go:89] found id: ""
	I0311 21:37:39.754787   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.754798   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:39.754805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:39.754865   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:39.794999   70908 cri.go:89] found id: ""
	I0311 21:37:39.795028   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.795038   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:39.795045   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:39.795108   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:39.836284   70908 cri.go:89] found id: ""
	I0311 21:37:39.836310   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.836321   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:39.836328   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:39.836386   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:39.876487   70908 cri.go:89] found id: ""
	I0311 21:37:39.876518   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.876530   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:39.876539   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:39.876601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:39.918750   70908 cri.go:89] found id: ""
	I0311 21:37:39.918785   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.918796   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:39.918813   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:39.918871   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:39.958486   70908 cri.go:89] found id: ""
	I0311 21:37:39.958517   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.958529   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:39.958537   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:39.958550   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:39.973899   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:39.973925   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:40.055954   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:40.055980   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:40.055995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:40.144801   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:40.144826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:40.189692   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:40.189722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:39.148581   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:41.647869   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:43.648550   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:40.949502   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.951277   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.528844   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:44.529317   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.748909   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:42.763794   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:42.763877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:42.801470   70908 cri.go:89] found id: ""
	I0311 21:37:42.801493   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.801500   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:42.801506   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:42.801561   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:42.846267   70908 cri.go:89] found id: ""
	I0311 21:37:42.846294   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.846301   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:42.846307   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:42.846357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:42.890257   70908 cri.go:89] found id: ""
	I0311 21:37:42.890283   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.890294   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:42.890301   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:42.890357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:42.933605   70908 cri.go:89] found id: ""
	I0311 21:37:42.933628   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.933636   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:42.933643   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:42.933699   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:42.979020   70908 cri.go:89] found id: ""
	I0311 21:37:42.979043   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.979052   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:42.979059   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:42.979122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:43.021695   70908 cri.go:89] found id: ""
	I0311 21:37:43.021724   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.021734   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:43.021741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:43.021801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:43.064356   70908 cri.go:89] found id: ""
	I0311 21:37:43.064398   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.064406   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:43.064412   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:43.064457   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:43.101878   70908 cri.go:89] found id: ""
	I0311 21:37:43.101901   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.101909   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:43.101917   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:43.101930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:43.185836   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:43.185861   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:43.185874   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:43.268879   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:43.268912   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:43.319582   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:43.319614   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:43.374996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:43.375022   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:45.890408   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:45.905973   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:45.906041   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:45.951994   70908 cri.go:89] found id: ""
	I0311 21:37:45.952025   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.952040   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:45.952049   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:45.952112   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:45.992913   70908 cri.go:89] found id: ""
	I0311 21:37:45.992953   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.992964   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:45.992971   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:45.993034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:46.036306   70908 cri.go:89] found id: ""
	I0311 21:37:46.036334   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.036345   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:46.036353   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:46.036410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:46.147754   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:48.647534   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:45.450180   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:47.949568   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:46.532244   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:49.028905   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:46.077532   70908 cri.go:89] found id: ""
	I0311 21:37:46.077564   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.077576   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:46.077583   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:46.077633   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:46.115953   70908 cri.go:89] found id: ""
	I0311 21:37:46.115976   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.115983   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:46.115990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:46.116072   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:46.155665   70908 cri.go:89] found id: ""
	I0311 21:37:46.155699   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.155709   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:46.155717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:46.155775   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:46.197650   70908 cri.go:89] found id: ""
	I0311 21:37:46.197677   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.197696   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:46.197705   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:46.197766   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:46.243006   70908 cri.go:89] found id: ""
	I0311 21:37:46.243030   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.243037   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:46.243045   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:46.243058   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:46.294668   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:46.294696   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:46.308700   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:46.308721   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:46.387188   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:46.387207   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:46.387219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:46.480390   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:46.480423   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.027202   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:49.042292   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:49.042361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:49.081547   70908 cri.go:89] found id: ""
	I0311 21:37:49.081568   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.081579   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:49.081585   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:49.081632   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:49.127438   70908 cri.go:89] found id: ""
	I0311 21:37:49.127467   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.127477   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:49.127485   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:49.127545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:49.173992   70908 cri.go:89] found id: ""
	I0311 21:37:49.174024   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.174033   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:49.174042   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:49.174114   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:49.217087   70908 cri.go:89] found id: ""
	I0311 21:37:49.217120   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.217130   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:49.217138   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:49.217198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:49.255929   70908 cri.go:89] found id: ""
	I0311 21:37:49.255955   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.255970   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:49.255978   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:49.256037   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:49.296373   70908 cri.go:89] found id: ""
	I0311 21:37:49.296399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.296409   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:49.296417   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:49.296474   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:49.335063   70908 cri.go:89] found id: ""
	I0311 21:37:49.335092   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.335103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:49.335110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:49.335176   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:49.378374   70908 cri.go:89] found id: ""
	I0311 21:37:49.378399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.378406   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:49.378414   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:49.378427   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.422193   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:49.422220   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:49.474861   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:49.474893   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:49.490193   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:49.490219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:49.571857   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:49.571880   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:49.571895   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:51.149814   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:53.648033   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:49.949603   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:51.949943   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:53.951963   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:51.531753   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:54.028723   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:52.168934   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:52.183086   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:52.183154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:52.221632   70908 cri.go:89] found id: ""
	I0311 21:37:52.221664   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.221675   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:52.221682   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:52.221743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:52.261550   70908 cri.go:89] found id: ""
	I0311 21:37:52.261575   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.261582   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:52.261588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:52.261638   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:52.302879   70908 cri.go:89] found id: ""
	I0311 21:37:52.302910   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.302920   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:52.302927   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:52.302987   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:52.346462   70908 cri.go:89] found id: ""
	I0311 21:37:52.346485   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.346494   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:52.346499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:52.346551   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:52.387949   70908 cri.go:89] found id: ""
	I0311 21:37:52.387977   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.387988   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:52.387995   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:52.388052   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:52.428527   70908 cri.go:89] found id: ""
	I0311 21:37:52.428564   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.428574   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:52.428582   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:52.428649   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:52.469516   70908 cri.go:89] found id: ""
	I0311 21:37:52.469548   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.469558   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:52.469565   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:52.469616   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:52.508371   70908 cri.go:89] found id: ""
	I0311 21:37:52.508407   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.508417   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:52.508429   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:52.508444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:52.587309   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:52.587346   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:52.587361   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:52.666419   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:52.666449   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:52.713150   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:52.713184   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:52.768011   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:52.768041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.284835   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:55.298742   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:55.298799   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:55.340215   70908 cri.go:89] found id: ""
	I0311 21:37:55.340240   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.340251   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:55.340257   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:55.340321   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:55.377930   70908 cri.go:89] found id: ""
	I0311 21:37:55.377956   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.377967   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:55.377974   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:55.378039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:55.418786   70908 cri.go:89] found id: ""
	I0311 21:37:55.418814   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.418822   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:55.418827   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:55.418883   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:55.461566   70908 cri.go:89] found id: ""
	I0311 21:37:55.461586   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.461593   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:55.461601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:55.461655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:55.502917   70908 cri.go:89] found id: ""
	I0311 21:37:55.502945   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.502955   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:55.502962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:55.503022   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:55.551417   70908 cri.go:89] found id: ""
	I0311 21:37:55.551441   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.551454   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:55.551462   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:55.551514   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:55.596060   70908 cri.go:89] found id: ""
	I0311 21:37:55.596092   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.596103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:55.596111   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:55.596172   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:55.635495   70908 cri.go:89] found id: ""
	I0311 21:37:55.635523   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.635535   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:55.635547   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:55.635564   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:55.691705   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:55.691735   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.707696   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:55.707718   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:55.780432   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:55.780452   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:55.780465   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:55.866033   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:55.866067   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:55.648873   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.147404   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:56.452135   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.951150   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:56.528533   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.529769   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.437299   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:58.453058   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:58.453125   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:58.493317   70908 cri.go:89] found id: ""
	I0311 21:37:58.493339   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.493347   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:58.493353   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:58.493408   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:58.543533   70908 cri.go:89] found id: ""
	I0311 21:37:58.543556   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.543567   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:58.543578   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:58.543634   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:58.585255   70908 cri.go:89] found id: ""
	I0311 21:37:58.585282   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.585292   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:58.585300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:58.585359   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:58.622393   70908 cri.go:89] found id: ""
	I0311 21:37:58.622421   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.622428   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:58.622434   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:58.622501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:58.661939   70908 cri.go:89] found id: ""
	I0311 21:37:58.661963   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.661971   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:58.661977   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:58.662034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:58.703628   70908 cri.go:89] found id: ""
	I0311 21:37:58.703663   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.703674   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:58.703682   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:58.703743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:58.742553   70908 cri.go:89] found id: ""
	I0311 21:37:58.742583   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.742594   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:58.742601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:58.742662   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:58.785016   70908 cri.go:89] found id: ""
	I0311 21:37:58.785040   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.785047   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:58.785055   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:58.785071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:58.857757   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:58.857773   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:58.857786   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:58.946120   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:58.946148   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:58.996288   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:58.996328   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:59.055371   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:59.055407   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:00.651621   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.149663   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:00.951776   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.451012   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:01.028303   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.028600   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:05.032276   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:01.571092   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:01.591149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:01.591238   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:01.629156   70908 cri.go:89] found id: ""
	I0311 21:38:01.629184   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.629196   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:01.629203   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:01.629261   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:01.673656   70908 cri.go:89] found id: ""
	I0311 21:38:01.673680   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.673687   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:01.673692   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:01.673739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:01.713361   70908 cri.go:89] found id: ""
	I0311 21:38:01.713389   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.713397   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:01.713403   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:01.713450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:01.757256   70908 cri.go:89] found id: ""
	I0311 21:38:01.757286   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.757298   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:01.757305   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:01.757362   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:01.797538   70908 cri.go:89] found id: ""
	I0311 21:38:01.797565   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.797573   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:01.797580   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:01.797635   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:01.838664   70908 cri.go:89] found id: ""
	I0311 21:38:01.838692   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.838701   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:01.838707   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:01.838754   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:01.893638   70908 cri.go:89] found id: ""
	I0311 21:38:01.893668   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.893679   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:01.893686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:01.893747   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:01.935547   70908 cri.go:89] found id: ""
	I0311 21:38:01.935569   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.935577   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:01.935585   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:01.935596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:01.989964   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:01.989988   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:02.004949   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:02.004973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:02.082006   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:02.082024   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:02.082041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:02.171040   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:02.171072   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:04.724699   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:04.741445   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:04.741512   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:04.783924   70908 cri.go:89] found id: ""
	I0311 21:38:04.783951   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.783962   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:04.783969   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:04.784028   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:04.825806   70908 cri.go:89] found id: ""
	I0311 21:38:04.825835   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.825845   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:04.825852   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:04.825913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:04.864070   70908 cri.go:89] found id: ""
	I0311 21:38:04.864106   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.864118   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:04.864126   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:04.864181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:04.901735   70908 cri.go:89] found id: ""
	I0311 21:38:04.901759   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.901769   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:04.901777   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:04.901832   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:04.941473   70908 cri.go:89] found id: ""
	I0311 21:38:04.941496   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.941505   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:04.941513   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:04.941569   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:04.993132   70908 cri.go:89] found id: ""
	I0311 21:38:04.993162   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.993170   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:04.993178   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:04.993237   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:05.037925   70908 cri.go:89] found id: ""
	I0311 21:38:05.037950   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.037960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:05.037967   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:05.038026   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:05.080726   70908 cri.go:89] found id: ""
	I0311 21:38:05.080773   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.080784   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:05.080794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:05.080806   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:05.138205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:05.138233   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:05.155048   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:05.155071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:05.233067   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:05.233086   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:05.233099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:05.317897   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:05.317928   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:05.646661   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.647686   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:05.949900   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.950261   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.528049   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:09.530724   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.863484   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:07.877342   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:07.877411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:07.916352   70908 cri.go:89] found id: ""
	I0311 21:38:07.916374   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.916383   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:07.916391   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:07.916454   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:07.954833   70908 cri.go:89] found id: ""
	I0311 21:38:07.954854   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.954863   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:07.954870   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:07.954926   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:07.993124   70908 cri.go:89] found id: ""
	I0311 21:38:07.993152   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.993161   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:07.993168   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:07.993232   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:08.039081   70908 cri.go:89] found id: ""
	I0311 21:38:08.039108   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.039118   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:08.039125   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:08.039191   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:08.084627   70908 cri.go:89] found id: ""
	I0311 21:38:08.084650   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.084658   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:08.084665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:08.084712   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:08.125986   70908 cri.go:89] found id: ""
	I0311 21:38:08.126015   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.126026   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:08.126034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:08.126080   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:08.167149   70908 cri.go:89] found id: ""
	I0311 21:38:08.167176   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.167188   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:08.167193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:08.167252   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:08.204988   70908 cri.go:89] found id: ""
	I0311 21:38:08.205012   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.205020   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:08.205028   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:08.205043   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:08.295226   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:08.295268   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:08.357789   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:08.357820   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:08.434091   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:08.434132   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:08.455208   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:08.455240   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:08.529620   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:11.030060   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:09.648047   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.649628   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:13.652370   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:10.450139   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:12.949551   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.531354   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:14.029703   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.044303   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:11.046353   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:11.088067   70908 cri.go:89] found id: ""
	I0311 21:38:11.088099   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.088110   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:11.088117   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:11.088177   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:11.131077   70908 cri.go:89] found id: ""
	I0311 21:38:11.131104   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.131114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:11.131121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:11.131181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:11.172409   70908 cri.go:89] found id: ""
	I0311 21:38:11.172431   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.172439   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:11.172444   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:11.172496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:11.216775   70908 cri.go:89] found id: ""
	I0311 21:38:11.216817   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.216825   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:11.216830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:11.216886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:11.255105   70908 cri.go:89] found id: ""
	I0311 21:38:11.255129   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.255137   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:11.255142   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:11.255205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:11.292397   70908 cri.go:89] found id: ""
	I0311 21:38:11.292429   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.292440   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:11.292448   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:11.292518   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:11.330376   70908 cri.go:89] found id: ""
	I0311 21:38:11.330397   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.330408   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:11.330415   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:11.330476   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:11.367699   70908 cri.go:89] found id: ""
	I0311 21:38:11.367727   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.367737   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:11.367748   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:11.367763   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:11.421847   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:11.421876   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:11.437570   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:11.437593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:11.522084   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:11.522108   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:11.522123   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:11.606181   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:11.606228   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.153952   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:14.175726   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:14.175798   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:14.221752   70908 cri.go:89] found id: ""
	I0311 21:38:14.221784   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.221798   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:14.221807   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:14.221895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:14.286690   70908 cri.go:89] found id: ""
	I0311 21:38:14.286720   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.286740   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:14.286757   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:14.286824   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:14.343764   70908 cri.go:89] found id: ""
	I0311 21:38:14.343790   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.343799   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:14.343806   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:14.343876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:14.381198   70908 cri.go:89] found id: ""
	I0311 21:38:14.381220   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.381230   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:14.381237   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:14.381307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:14.421578   70908 cri.go:89] found id: ""
	I0311 21:38:14.421603   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.421613   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:14.421620   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:14.421678   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:14.462945   70908 cri.go:89] found id: ""
	I0311 21:38:14.462972   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.462982   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:14.462990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:14.463049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:14.503503   70908 cri.go:89] found id: ""
	I0311 21:38:14.503532   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.503543   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:14.503550   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:14.503610   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:14.543987   70908 cri.go:89] found id: ""
	I0311 21:38:14.544021   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.544034   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:14.544045   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:14.544062   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:14.624781   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:14.624804   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:14.624821   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:14.707130   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:14.707161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.750815   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:14.750848   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:14.806855   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:14.806882   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:16.149516   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:18.646716   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:14.949827   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:16.953660   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:16.031935   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:18.529085   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:17.325267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:17.340421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:17.340483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:17.382808   70908 cri.go:89] found id: ""
	I0311 21:38:17.382831   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.382841   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:17.382849   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:17.382906   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:17.424838   70908 cri.go:89] found id: ""
	I0311 21:38:17.424865   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.424875   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:17.424883   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:17.424940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:17.466298   70908 cri.go:89] found id: ""
	I0311 21:38:17.466320   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.466327   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:17.466333   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:17.466397   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:17.506648   70908 cri.go:89] found id: ""
	I0311 21:38:17.506678   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.506685   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:17.506691   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:17.506739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:17.544019   70908 cri.go:89] found id: ""
	I0311 21:38:17.544048   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.544057   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:17.544067   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:17.544154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:17.583691   70908 cri.go:89] found id: ""
	I0311 21:38:17.583710   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.583717   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:17.583723   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:17.583768   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:17.624432   70908 cri.go:89] found id: ""
	I0311 21:38:17.624453   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.624460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:17.624466   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:17.624516   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:17.663253   70908 cri.go:89] found id: ""
	I0311 21:38:17.663294   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.663312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:17.663322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:17.663339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:17.749928   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:17.749962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:17.792817   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:17.792853   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:17.847391   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:17.847419   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:17.862813   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:17.862835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:17.935307   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:20.435995   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:20.452441   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:20.452510   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:20.491960   70908 cri.go:89] found id: ""
	I0311 21:38:20.491985   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.491992   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:20.491998   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:20.492045   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:20.531679   70908 cri.go:89] found id: ""
	I0311 21:38:20.531700   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.531707   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:20.531712   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:20.531764   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:20.571666   70908 cri.go:89] found id: ""
	I0311 21:38:20.571687   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.571694   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:20.571699   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:20.571762   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:20.611165   70908 cri.go:89] found id: ""
	I0311 21:38:20.611187   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.611194   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:20.611199   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:20.611248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:20.648680   70908 cri.go:89] found id: ""
	I0311 21:38:20.648709   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.648720   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:20.648728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:20.648801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:20.690177   70908 cri.go:89] found id: ""
	I0311 21:38:20.690204   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.690215   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:20.690222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:20.690298   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:20.728918   70908 cri.go:89] found id: ""
	I0311 21:38:20.728949   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.728960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:20.728968   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:20.729039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:20.773559   70908 cri.go:89] found id: ""
	I0311 21:38:20.773586   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.773596   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:20.773607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:20.773623   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:20.788709   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:20.788750   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:20.869832   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:20.869856   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:20.869868   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:20.963515   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:20.963544   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:21.007029   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:21.007055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:21.147703   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.660410   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:19.449416   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:21.451194   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.950401   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:20.529497   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:22.529947   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:25.030431   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.566134   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:23.583855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:23.583911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:23.623605   70908 cri.go:89] found id: ""
	I0311 21:38:23.623633   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.623656   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:23.623664   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:23.623719   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:23.663058   70908 cri.go:89] found id: ""
	I0311 21:38:23.663081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.663091   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:23.663098   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:23.663157   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:23.701930   70908 cri.go:89] found id: ""
	I0311 21:38:23.701963   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.701975   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:23.701985   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:23.702049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:23.743925   70908 cri.go:89] found id: ""
	I0311 21:38:23.743955   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.743964   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:23.743970   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:23.744046   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:23.784030   70908 cri.go:89] found id: ""
	I0311 21:38:23.784055   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.784066   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:23.784073   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:23.784132   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:23.823054   70908 cri.go:89] found id: ""
	I0311 21:38:23.823081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.823089   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:23.823097   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:23.823156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:23.863629   70908 cri.go:89] found id: ""
	I0311 21:38:23.863654   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.863662   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:23.863668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:23.863724   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:23.904429   70908 cri.go:89] found id: ""
	I0311 21:38:23.904454   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.904462   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:23.904470   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:23.904481   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:23.962356   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:23.962393   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:23.977667   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:23.977689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:24.068791   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:24.068820   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:24.068835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:24.157857   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:24.157892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:26.147447   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:28.148069   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:26.450243   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:28.950495   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:27.530194   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:30.029286   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:26.705872   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:26.720840   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:26.720936   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:26.766449   70908 cri.go:89] found id: ""
	I0311 21:38:26.766480   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.766490   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:26.766496   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:26.766557   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:26.806179   70908 cri.go:89] found id: ""
	I0311 21:38:26.806203   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.806210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:26.806216   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:26.806275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:26.850737   70908 cri.go:89] found id: ""
	I0311 21:38:26.850765   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.850775   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:26.850785   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:26.850845   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:26.897694   70908 cri.go:89] found id: ""
	I0311 21:38:26.897722   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.897733   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:26.897744   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:26.897802   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:26.940940   70908 cri.go:89] found id: ""
	I0311 21:38:26.940962   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.940969   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:26.940975   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:26.941021   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:26.978576   70908 cri.go:89] found id: ""
	I0311 21:38:26.978604   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.978614   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:26.978625   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:26.978682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:27.016331   70908 cri.go:89] found id: ""
	I0311 21:38:27.016363   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.016374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:27.016381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:27.016439   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:27.061541   70908 cri.go:89] found id: ""
	I0311 21:38:27.061569   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.061580   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:27.061590   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:27.061609   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:27.154977   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:27.155017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:27.204458   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:27.204488   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:27.259960   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:27.259997   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:27.277806   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:27.277832   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:27.356111   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:29.856828   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:29.871331   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:29.871413   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:29.912867   70908 cri.go:89] found id: ""
	I0311 21:38:29.912895   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.912904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:29.912910   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:29.912973   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:29.953458   70908 cri.go:89] found id: ""
	I0311 21:38:29.953483   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.953491   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:29.953497   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:29.953553   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:29.997873   70908 cri.go:89] found id: ""
	I0311 21:38:29.997904   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.997912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:29.997921   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:29.997983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:30.038831   70908 cri.go:89] found id: ""
	I0311 21:38:30.038861   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.038872   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:30.038880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:30.038940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:30.082089   70908 cri.go:89] found id: ""
	I0311 21:38:30.082117   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.082127   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:30.082135   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:30.082213   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:30.121167   70908 cri.go:89] found id: ""
	I0311 21:38:30.121198   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.121209   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:30.121216   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:30.121274   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:30.162342   70908 cri.go:89] found id: ""
	I0311 21:38:30.162371   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.162380   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:30.162393   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:30.162452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:30.201727   70908 cri.go:89] found id: ""
	I0311 21:38:30.201753   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.201761   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:30.201769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:30.201780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:30.283314   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:30.283346   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:30.333900   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:30.333930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:30.391761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:30.391798   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:30.407907   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:30.407930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:30.489560   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:30.646773   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.649048   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:31.456251   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:33.951315   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.529160   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:34.530183   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.989976   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:33.004724   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:33.004814   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:33.049701   70908 cri.go:89] found id: ""
	I0311 21:38:33.049733   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.049743   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:33.049753   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:33.049823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:33.097759   70908 cri.go:89] found id: ""
	I0311 21:38:33.097792   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.097804   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:33.097811   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:33.097875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:33.143257   70908 cri.go:89] found id: ""
	I0311 21:38:33.143291   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.143300   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:33.143308   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:33.143376   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:33.187434   70908 cri.go:89] found id: ""
	I0311 21:38:33.187464   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.187477   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:33.187483   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:33.187558   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:33.236201   70908 cri.go:89] found id: ""
	I0311 21:38:33.236230   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.236239   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:33.236245   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:33.236312   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:33.279710   70908 cri.go:89] found id: ""
	I0311 21:38:33.279783   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.279816   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:33.279830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:33.279898   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:33.325022   70908 cri.go:89] found id: ""
	I0311 21:38:33.325053   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.325064   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:33.325072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:33.325138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:33.368588   70908 cri.go:89] found id: ""
	I0311 21:38:33.368614   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.368622   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:33.368629   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:33.368640   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:33.427761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:33.427801   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:33.444440   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:33.444472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:33.527745   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:33.527764   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:33.527775   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:33.608215   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:33.608248   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:35.146541   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:37.146917   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.450175   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:38.949371   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.531125   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:39.028780   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.158253   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:36.172370   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:36.172438   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:36.216905   70908 cri.go:89] found id: ""
	I0311 21:38:36.216935   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.216945   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:36.216951   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:36.216996   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:36.260844   70908 cri.go:89] found id: ""
	I0311 21:38:36.260875   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.260885   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:36.260890   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:36.260941   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:36.306730   70908 cri.go:89] found id: ""
	I0311 21:38:36.306755   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.306767   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:36.306772   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:36.306820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:36.346957   70908 cri.go:89] found id: ""
	I0311 21:38:36.346993   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.347004   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:36.347012   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:36.347082   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:36.392265   70908 cri.go:89] found id: ""
	I0311 21:38:36.392295   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.392306   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:36.392313   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:36.392379   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:36.433383   70908 cri.go:89] found id: ""
	I0311 21:38:36.433407   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.433414   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:36.433421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:36.433467   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:36.471291   70908 cri.go:89] found id: ""
	I0311 21:38:36.471325   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.471336   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:36.471344   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:36.471411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:36.514662   70908 cri.go:89] found id: ""
	I0311 21:38:36.514688   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.514698   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:36.514708   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:36.514722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:36.533222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:36.533251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:36.616359   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:36.616384   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:36.616400   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:36.719105   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:36.719137   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:36.771125   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:36.771156   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.324847   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:39.341149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:39.341218   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:39.380284   70908 cri.go:89] found id: ""
	I0311 21:38:39.380324   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.380335   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:39.380343   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:39.380407   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:39.429860   70908 cri.go:89] found id: ""
	I0311 21:38:39.429886   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.429894   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:39.429899   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:39.429960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:39.468089   70908 cri.go:89] found id: ""
	I0311 21:38:39.468113   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.468121   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:39.468127   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:39.468188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:39.508589   70908 cri.go:89] found id: ""
	I0311 21:38:39.508617   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.508628   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:39.508636   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:39.508695   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:39.552427   70908 cri.go:89] found id: ""
	I0311 21:38:39.552451   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.552459   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:39.552464   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:39.552511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:39.592586   70908 cri.go:89] found id: ""
	I0311 21:38:39.592607   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.592615   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:39.592621   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:39.592670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:39.637138   70908 cri.go:89] found id: ""
	I0311 21:38:39.637167   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.637178   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:39.637186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:39.637248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:39.679422   70908 cri.go:89] found id: ""
	I0311 21:38:39.679457   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.679470   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:39.679482   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:39.679499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.734815   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:39.734850   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:39.750448   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:39.750472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:39.832912   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:39.832936   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:39.832951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:39.924020   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:39.924061   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:39.648759   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:42.146226   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:40.950021   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:42.951344   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:41.528407   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:43.529130   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:43.529166   70458 pod_ready.go:81] duration metric: took 4m0.007627735s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	E0311 21:38:43.529179   70458 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0311 21:38:43.529188   70458 pod_ready.go:38] duration metric: took 4m4.551429192s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:38:43.529207   70458 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:38:43.529242   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:43.529306   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:43.589292   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:43.589314   70458 cri.go:89] found id: ""
	I0311 21:38:43.589323   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:43.589388   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.595182   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:43.595267   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:43.645002   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:43.645027   70458 cri.go:89] found id: ""
	I0311 21:38:43.645036   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:43.645088   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.650463   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:43.650537   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:43.693876   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:43.693894   70458 cri.go:89] found id: ""
	I0311 21:38:43.693902   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:43.693958   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.699273   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:43.699340   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:43.752552   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:43.752585   70458 cri.go:89] found id: ""
	I0311 21:38:43.752596   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:43.752667   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.758307   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:43.758384   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:43.802761   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:43.802789   70458 cri.go:89] found id: ""
	I0311 21:38:43.802798   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:43.802858   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.807796   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:43.807867   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:43.853820   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:43.853843   70458 cri.go:89] found id: ""
	I0311 21:38:43.853851   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:43.853907   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.859377   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:43.859451   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:43.910605   70458 cri.go:89] found id: ""
	I0311 21:38:43.910640   70458 logs.go:276] 0 containers: []
	W0311 21:38:43.910648   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:43.910655   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:43.910702   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:43.955602   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:43.955624   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:43.955629   70458 cri.go:89] found id: ""
	I0311 21:38:43.955645   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:43.955713   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.960856   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.965889   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:43.965919   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:44.013879   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:44.013908   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:44.064641   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:44.064669   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:44.118095   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:44.118120   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:44.177775   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:44.177819   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:44.242090   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:44.242129   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:44.261628   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:44.261665   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:44.322616   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:44.322656   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:44.388117   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:44.388159   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:44.445980   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:44.446018   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:44.980199   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:44.980243   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:45.138312   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:45.138368   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:45.208626   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:45.208664   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:42.472932   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:42.488034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:42.488090   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:42.530945   70908 cri.go:89] found id: ""
	I0311 21:38:42.530971   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.530981   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:42.530989   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:42.531053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:42.571906   70908 cri.go:89] found id: ""
	I0311 21:38:42.571939   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.571951   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:42.571960   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:42.572029   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:42.613198   70908 cri.go:89] found id: ""
	I0311 21:38:42.613228   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.613239   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:42.613247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:42.613330   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:42.654740   70908 cri.go:89] found id: ""
	I0311 21:38:42.654762   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.654770   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:42.654775   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:42.654821   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:42.694797   70908 cri.go:89] found id: ""
	I0311 21:38:42.694836   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.694847   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:42.694854   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:42.694931   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:42.738918   70908 cri.go:89] found id: ""
	I0311 21:38:42.738946   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.738958   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:42.738965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:42.739032   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:42.780836   70908 cri.go:89] found id: ""
	I0311 21:38:42.780870   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.780881   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:42.780888   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:42.780943   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:42.824672   70908 cri.go:89] found id: ""
	I0311 21:38:42.824701   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.824712   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:42.824721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:42.824747   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:42.877219   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:42.877253   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:42.934996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:42.935033   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:42.952125   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:42.952152   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:43.036657   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:43.036678   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:43.036695   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:45.629959   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:45.648501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:45.648581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:45.690083   70908 cri.go:89] found id: ""
	I0311 21:38:45.690117   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.690128   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:45.690136   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:45.690201   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:45.736497   70908 cri.go:89] found id: ""
	I0311 21:38:45.736519   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.736526   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:45.736531   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:45.736576   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:45.778590   70908 cri.go:89] found id: ""
	I0311 21:38:45.778625   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.778636   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:45.778645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:45.778723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:45.822322   70908 cri.go:89] found id: ""
	I0311 21:38:45.822351   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.822359   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:45.822365   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:45.822419   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:45.868591   70908 cri.go:89] found id: ""
	I0311 21:38:45.868618   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.868627   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:45.868633   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:45.868680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:45.915137   70908 cri.go:89] found id: ""
	I0311 21:38:45.915165   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.915178   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:45.915187   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:45.915258   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:45.960432   70908 cri.go:89] found id: ""
	I0311 21:38:45.960459   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.960469   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:45.960476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:45.960529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:46.006089   70908 cri.go:89] found id: ""
	I0311 21:38:46.006168   70908 logs.go:276] 0 containers: []
	W0311 21:38:46.006185   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:46.006195   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:46.006209   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:44.153091   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:46.650654   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:44.951550   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:46.952791   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:47.756629   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:47.776613   70458 api_server.go:72] duration metric: took 4m14.182101385s to wait for apiserver process to appear ...
	I0311 21:38:47.776651   70458 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:38:47.776691   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:47.776774   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:47.826534   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:47.826553   70458 cri.go:89] found id: ""
	I0311 21:38:47.826560   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:47.826609   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.831565   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:47.831637   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:47.876504   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:47.876531   70458 cri.go:89] found id: ""
	I0311 21:38:47.876541   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:47.876598   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.882130   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:47.882224   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:47.930064   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:47.930087   70458 cri.go:89] found id: ""
	I0311 21:38:47.930096   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:47.930139   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.935357   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:47.935433   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:47.989169   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:47.989196   70458 cri.go:89] found id: ""
	I0311 21:38:47.989206   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:47.989262   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.994341   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:47.994401   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:48.037592   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:48.037619   70458 cri.go:89] found id: ""
	I0311 21:38:48.037629   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:48.037692   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.043377   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:48.043453   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:48.088629   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:48.088651   70458 cri.go:89] found id: ""
	I0311 21:38:48.088671   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:48.088722   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.093944   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:48.094016   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:48.144943   70458 cri.go:89] found id: ""
	I0311 21:38:48.144971   70458 logs.go:276] 0 containers: []
	W0311 21:38:48.144983   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:48.144990   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:48.145050   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:48.188857   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:48.188877   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:48.188881   70458 cri.go:89] found id: ""
	I0311 21:38:48.188887   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:48.188934   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.195123   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.200643   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:48.200673   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:48.246864   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:48.246894   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:48.715510   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:48.715545   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:48.775676   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:48.775716   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:48.793121   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:48.793157   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:48.863992   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:48.864040   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:48.922775   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:48.922810   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:48.996820   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:48.996866   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:49.045065   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:49.045097   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:49.199072   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:49.199137   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:49.283329   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:49.283360   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:49.340461   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:49.340502   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:49.391436   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:49.391460   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:46.064257   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:46.064296   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:46.080304   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:46.080337   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:46.177978   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:46.178001   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:46.178017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:46.265260   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:46.265298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:48.814221   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:48.835695   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:48.835793   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:48.898391   70908 cri.go:89] found id: ""
	I0311 21:38:48.898418   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.898429   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:48.898437   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:48.898501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:48.972552   70908 cri.go:89] found id: ""
	I0311 21:38:48.972596   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.972607   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:48.972617   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:48.972684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:49.022346   70908 cri.go:89] found id: ""
	I0311 21:38:49.022371   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.022379   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:49.022384   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:49.022430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:49.078415   70908 cri.go:89] found id: ""
	I0311 21:38:49.078444   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.078455   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:49.078463   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:49.078526   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:49.119369   70908 cri.go:89] found id: ""
	I0311 21:38:49.119402   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.119412   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:49.119420   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:49.119497   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:49.169866   70908 cri.go:89] found id: ""
	I0311 21:38:49.169897   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.169908   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:49.169916   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:49.169978   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:49.223619   70908 cri.go:89] found id: ""
	I0311 21:38:49.223642   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.223650   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:49.223656   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:49.223704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:49.278499   70908 cri.go:89] found id: ""
	I0311 21:38:49.278531   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.278542   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:49.278551   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:49.278563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:49.294734   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:49.294760   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:49.390223   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:49.390252   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:49.390267   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:49.481214   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:49.481250   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:49.530285   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:49.530321   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:49.149825   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.648269   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:53.140832   70604 pod_ready.go:81] duration metric: took 4m0.000856291s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" ...
	E0311 21:38:53.140873   70604 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" (will not retry!)
	I0311 21:38:53.140895   70604 pod_ready.go:38] duration metric: took 4m13.032115697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:38:53.140925   70604 kubeadm.go:591] duration metric: took 4m21.406945055s to restartPrimaryControlPlane
	W0311 21:38:53.140993   70604 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:38:53.141028   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:38:49.450738   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.950491   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:53.952209   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.955522   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:38:51.961814   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0311 21:38:51.963188   70458 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:38:51.963209   70458 api_server.go:131] duration metric: took 4.186550701s to wait for apiserver health ...
	I0311 21:38:51.963218   70458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:38:51.963242   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:51.963294   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:52.020708   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:52.020727   70458 cri.go:89] found id: ""
	I0311 21:38:52.020746   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:52.020815   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.026606   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:52.026668   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:52.072045   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:52.072063   70458 cri.go:89] found id: ""
	I0311 21:38:52.072071   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:52.072130   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.078592   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:52.078771   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:52.139445   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:52.139480   70458 cri.go:89] found id: ""
	I0311 21:38:52.139490   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:52.139548   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.148641   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:52.148724   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:52.199332   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:52.199360   70458 cri.go:89] found id: ""
	I0311 21:38:52.199371   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:52.199433   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.207033   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:52.207096   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:52.267514   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:52.267540   70458 cri.go:89] found id: ""
	I0311 21:38:52.267549   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:52.267615   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.274048   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:52.274132   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:52.330293   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:52.330324   70458 cri.go:89] found id: ""
	I0311 21:38:52.330334   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:52.330395   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.336062   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:52.336143   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:52.381909   70458 cri.go:89] found id: ""
	I0311 21:38:52.381941   70458 logs.go:276] 0 containers: []
	W0311 21:38:52.381952   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:52.381960   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:52.382026   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:52.441879   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:52.441908   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:52.441919   70458 cri.go:89] found id: ""
	I0311 21:38:52.441928   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:52.441988   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.449288   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.456632   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:52.456664   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.526327   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:52.526368   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:52.545008   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:52.545035   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:52.699959   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:52.699995   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:52.762045   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:52.762079   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:52.828963   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:52.829005   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:52.874202   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:52.874237   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:52.916842   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:52.916872   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:52.969778   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:52.969807   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:53.365097   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:53.365147   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:53.446533   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:53.446576   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:53.500017   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:53.500043   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:53.572904   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:53.572954   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:52.087848   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:52.108284   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:52.108351   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:52.161648   70908 cri.go:89] found id: ""
	I0311 21:38:52.161680   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.161691   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:52.161698   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:52.161763   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:52.206552   70908 cri.go:89] found id: ""
	I0311 21:38:52.206577   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.206588   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:52.206596   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:52.206659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:52.253954   70908 cri.go:89] found id: ""
	I0311 21:38:52.253984   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.253996   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:52.254004   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:52.254068   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:52.302343   70908 cri.go:89] found id: ""
	I0311 21:38:52.302384   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.302396   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:52.302404   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:52.302472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:52.345581   70908 cri.go:89] found id: ""
	I0311 21:38:52.345608   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.345618   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:52.345624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:52.345683   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:52.392502   70908 cri.go:89] found id: ""
	I0311 21:38:52.392531   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.392542   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:52.392549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:52.392601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:52.447625   70908 cri.go:89] found id: ""
	I0311 21:38:52.447651   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.447661   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:52.447668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:52.447728   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:52.490965   70908 cri.go:89] found id: ""
	I0311 21:38:52.490994   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.491007   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:52.491019   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:52.491034   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:52.539604   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:52.539650   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.597735   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:52.597771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:52.617572   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:52.617610   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:52.706724   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:52.706753   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:52.706769   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:55.293550   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:55.313904   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:55.314005   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:55.368607   70908 cri.go:89] found id: ""
	I0311 21:38:55.368639   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.368647   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:55.368654   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:55.368714   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:55.434052   70908 cri.go:89] found id: ""
	I0311 21:38:55.434081   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.434092   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:55.434100   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:55.434189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:55.483532   70908 cri.go:89] found id: ""
	I0311 21:38:55.483562   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.483572   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:55.483579   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:55.483647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:55.528681   70908 cri.go:89] found id: ""
	I0311 21:38:55.528708   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.528721   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:55.528728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:55.528825   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:55.583143   70908 cri.go:89] found id: ""
	I0311 21:38:55.583167   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.583174   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:55.583179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:55.583240   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:55.636577   70908 cri.go:89] found id: ""
	I0311 21:38:55.636599   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.636607   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:55.636612   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:55.636670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:55.697268   70908 cri.go:89] found id: ""
	I0311 21:38:55.697295   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.697306   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:55.697314   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:55.697374   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:55.749272   70908 cri.go:89] found id: ""
	I0311 21:38:55.749302   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.749312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:55.749322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:55.749335   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:55.841581   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:55.841643   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:55.898537   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:55.898574   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:55.973278   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:55.973329   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:55.992958   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:55.992986   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:56.137313   70458 system_pods.go:59] 8 kube-system pods found
	I0311 21:38:56.137347   70458 system_pods.go:61] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running
	I0311 21:38:56.137354   70458 system_pods.go:61] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running
	I0311 21:38:56.137361   70458 system_pods.go:61] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running
	I0311 21:38:56.137366   70458 system_pods.go:61] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running
	I0311 21:38:56.137371   70458 system_pods.go:61] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:38:56.137375   70458 system_pods.go:61] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running
	I0311 21:38:56.137383   70458 system_pods.go:61] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:38:56.137390   70458 system_pods.go:61] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:38:56.137400   70458 system_pods.go:74] duration metric: took 4.174175629s to wait for pod list to return data ...
	I0311 21:38:56.137409   70458 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:38:56.140315   70458 default_sa.go:45] found service account: "default"
	I0311 21:38:56.140344   70458 default_sa.go:55] duration metric: took 2.92722ms for default service account to be created ...
	I0311 21:38:56.140356   70458 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:38:56.146873   70458 system_pods.go:86] 8 kube-system pods found
	I0311 21:38:56.146912   70458 system_pods.go:89] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running
	I0311 21:38:56.146923   70458 system_pods.go:89] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running
	I0311 21:38:56.146932   70458 system_pods.go:89] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running
	I0311 21:38:56.146940   70458 system_pods.go:89] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running
	I0311 21:38:56.146945   70458 system_pods.go:89] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:38:56.146951   70458 system_pods.go:89] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running
	I0311 21:38:56.146960   70458 system_pods.go:89] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:38:56.146972   70458 system_pods.go:89] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:38:56.146983   70458 system_pods.go:126] duration metric: took 6.619737ms to wait for k8s-apps to be running ...
	I0311 21:38:56.146998   70458 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:38:56.147056   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:38:56.165354   70458 system_svc.go:56] duration metric: took 18.346754ms WaitForService to wait for kubelet
	I0311 21:38:56.165387   70458 kubeadm.go:576] duration metric: took 4m22.570894549s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:38:56.165413   70458 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:38:56.168819   70458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:38:56.168845   70458 node_conditions.go:123] node cpu capacity is 2
	I0311 21:38:56.168856   70458 node_conditions.go:105] duration metric: took 3.437527ms to run NodePressure ...
	I0311 21:38:56.168868   70458 start.go:240] waiting for startup goroutines ...
	I0311 21:38:56.168875   70458 start.go:245] waiting for cluster config update ...
	I0311 21:38:56.168885   70458 start.go:254] writing updated cluster config ...
	I0311 21:38:56.169153   70458 ssh_runner.go:195] Run: rm -f paused
	I0311 21:38:56.225977   70458 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0311 21:38:56.228234   70458 out.go:177] * Done! kubectl is now configured to use "no-preload-324578" cluster and "default" namespace by default
	I0311 21:38:56.450729   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:58.450799   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	W0311 21:38:56.084193   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:58.584354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:58.604767   70908 kubeadm.go:591] duration metric: took 4m4.440744932s to restartPrimaryControlPlane
	W0311 21:38:58.604844   70908 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:38:58.604872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:38:59.965834   70908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.36094005s)
	I0311 21:38:59.965906   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:38:59.982020   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:38:59.994794   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:39:00.007116   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:39:00.007138   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:39:00.007182   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:39:00.019744   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:39:00.019802   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:39:00.033311   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:39:00.045608   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:39:00.045685   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:39:00.059722   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.071140   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:39:00.071199   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.082635   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:39:00.093311   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:39:00.093374   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:39:00.104995   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:39:00.372164   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:39:00.950799   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:03.450080   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:05.949899   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:07.950640   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:10.450583   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:12.949481   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:14.950496   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:16.951064   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:18.958165   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:21.450609   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:23.949791   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:26.302837   70604 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.161781704s)
	I0311 21:39:26.302921   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:39:26.319602   70604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:39:26.331483   70604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:39:26.343632   70604 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:39:26.343658   70604 kubeadm.go:156] found existing configuration files:
	
	I0311 21:39:26.343705   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:39:26.354863   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:39:26.354919   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:39:26.366087   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:39:26.377221   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:39:26.377282   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:39:26.389769   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:39:26.401201   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:39:26.401255   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:39:26.412357   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:39:26.423962   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:39:26.424035   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
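
The grep-then-rm sequence above (which also ran at 21:39:00 for the v1.20.0 profile) checks whether each kubeconfig on the node still references the expected control-plane endpoint and deletes the file when it does not, so that the following kubeadm init can regenerate it. Below is a minimal Go sketch of that pattern, assuming the endpoint and file list seen in the log; it only illustrates the shell commands being run and is not minikube's actual implementation.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // cleanStaleKubeconfigs mirrors the grep-then-rm pattern in the log:
    // if a kubeconfig does not mention the expected control-plane endpoint
    // (or does not exist at all), remove it so kubeadm can rewrite it.
    func cleanStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the pattern or the file is missing.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q not found in %s - removing\n", endpoint, f)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }
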
	I0311 21:39:26.436189   70604 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:39:26.672030   70604 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:39:25.952857   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:28.449272   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:30.450630   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:32.450912   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:35.908605   70604 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 21:39:35.908656   70604 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:39:35.908751   70604 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:39:35.908846   70604 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:39:35.908967   70604 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:39:35.909026   70604 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:39:35.910690   70604 out.go:204]   - Generating certificates and keys ...
	I0311 21:39:35.910785   70604 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:39:35.910849   70604 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:39:35.910952   70604 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:39:35.911039   70604 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:39:35.911106   70604 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:39:35.911177   70604 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:39:35.911268   70604 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:39:35.911353   70604 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:39:35.911449   70604 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:39:35.911551   70604 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:39:35.911604   70604 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:39:35.911689   70604 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:39:35.911762   70604 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:39:35.911869   70604 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:39:35.911974   70604 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:39:35.912067   70604 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:39:35.912217   70604 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:39:35.912320   70604 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:39:35.914908   70604 out.go:204]   - Booting up control plane ...
	I0311 21:39:35.915026   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:39:35.915126   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:39:35.915216   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:39:35.915321   70604 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:39:35.915431   70604 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:39:35.915487   70604 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:39:35.915659   70604 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:39:35.915792   70604 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503325 seconds
	I0311 21:39:35.915925   70604 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:39:35.916039   70604 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:39:35.916091   70604 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:39:35.916314   70604 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-743937 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:39:35.916408   70604 kubeadm.go:309] [bootstrap-token] Using token: hxeoeg.f2scq51qa57vwzwt
	I0311 21:39:35.917880   70604 out.go:204]   - Configuring RBAC rules ...
	I0311 21:39:35.917995   70604 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:39:35.918093   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:39:35.918297   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:39:35.918490   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:39:35.918629   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:39:35.918745   70604 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:39:35.918907   70604 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:39:35.918974   70604 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:39:35.919031   70604 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:39:35.919048   70604 kubeadm.go:309] 
	I0311 21:39:35.919118   70604 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:39:35.919128   70604 kubeadm.go:309] 
	I0311 21:39:35.919225   70604 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:39:35.919236   70604 kubeadm.go:309] 
	I0311 21:39:35.919266   70604 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:39:35.919344   70604 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:39:35.919405   70604 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:39:35.919412   70604 kubeadm.go:309] 
	I0311 21:39:35.919461   70604 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:39:35.919467   70604 kubeadm.go:309] 
	I0311 21:39:35.919505   70604 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:39:35.919511   70604 kubeadm.go:309] 
	I0311 21:39:35.919553   70604 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:39:35.919640   70604 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:39:35.919727   70604 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:39:35.919736   70604 kubeadm.go:309] 
	I0311 21:39:35.919835   70604 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:39:35.919949   70604 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:39:35.919964   70604 kubeadm.go:309] 
	I0311 21:39:35.920071   70604 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token hxeoeg.f2scq51qa57vwzwt \
	I0311 21:39:35.920172   70604 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:39:35.920193   70604 kubeadm.go:309] 	--control-plane 
	I0311 21:39:35.920199   70604 kubeadm.go:309] 
	I0311 21:39:35.920271   70604 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:39:35.920280   70604 kubeadm.go:309] 
	I0311 21:39:35.920349   70604 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token hxeoeg.f2scq51qa57vwzwt \
	I0311 21:39:35.920479   70604 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:39:35.920507   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:39:35.920517   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:39:35.922125   70604 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:39:35.923386   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:39:35.955828   70604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
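
The bridge CNI step above copies a 457-byte conflist into /etc/cni/net.d/1-k8s.conflist, but the file contents themselves are not shown in the log. The sketch below is a hedged illustration of what a typical bridge-plugin conflist looks like and how it could be written to that path; every field value here is an assumption for illustration, not the bytes minikube actually copied.

    package main

    import (
        "log"
        "os"
    )

    // A representative bridge CNI conflist. The real file minikube writes is
    // not visible in the log, so these values are illustrative only.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }
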
	I0311 21:39:36.065309   70604 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:39:36.065389   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:36.065408   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-743937 minikube.k8s.io/updated_at=2024_03_11T21_39_36_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=embed-certs-743937 minikube.k8s.io/primary=true
	I0311 21:39:36.370945   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:36.370961   70604 ops.go:34] apiserver oom_adj: -16
	I0311 21:39:36.871194   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:37.371937   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:37.871974   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:38.371330   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:38.871791   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:34.949300   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:36.942990   70417 pod_ready.go:81] duration metric: took 4m0.000574155s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" ...
	E0311 21:39:36.943022   70417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0311 21:39:36.943043   70417 pod_ready.go:38] duration metric: took 4m12.043798271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:36.943093   70417 kubeadm.go:591] duration metric: took 4m20.121624644s to restartPrimaryControlPlane
	W0311 21:39:36.943155   70417 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:39:36.943183   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
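
The 21:39:36 pod_ready lines above show the readiness wait giving up after its 4m0s budget because the metrics-server pod never reported Ready, after which the profile falls back to a full cluster reset. The following is a minimal client-go sketch of that kind of polling-with-deadline wait, assuming the kubeconfig path and pod name taken from the log; it illustrates the pattern and is not minikube's pod_ready.go itself.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True or the
    // timeout elapses, roughly the behaviour the log reports above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting %s for pod %s/%s to be Ready", timeout, ns, name)
            }
            time.Sleep(2 * time.Second)
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitPodReady(context.Background(), cs, "kube-system", "metrics-server-57f55c9bc5-kxl6n", 4*time.Minute))
    }
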
	I0311 21:39:39.371531   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:39.872032   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:40.371717   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:40.871615   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:41.371577   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:41.871841   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:42.371050   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:42.871044   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:43.371446   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:43.871815   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:44.371243   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:44.872056   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:45.371993   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:45.871213   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:46.371397   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:46.871185   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.371541   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.871121   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.971855   70604 kubeadm.go:1106] duration metric: took 11.906533451s to wait for elevateKubeSystemPrivileges
	W0311 21:39:47.971895   70604 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 21:39:47.971902   70604 kubeadm.go:393] duration metric: took 5m16.305518086s to StartCluster
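
The elevateKubeSystemPrivileges phase summarized above combines two actions visible in the log: creating the minikube-rbac clusterrolebinding that grants cluster-admin to the kube-system:default service account, and repeatedly running "kubectl get sa default" until that service account exists. Below is a simplified Go sketch of those two steps (ordering simplified), reusing the kubectl binary and kubeconfig paths from the log; it is an illustration, not minikube's kubeadm.go.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    const (
        kubectl    = "/var/lib/minikube/binaries/v1.28.4/kubectl"
        kubeconfig = "--kubeconfig=/var/lib/minikube/kubeconfig"
    )

    func main() {
        // Poll until the "default" service account exists, mirroring the
        // repeated "kubectl get sa default" calls in the log.
        for {
            if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
                break
            }
            time.Sleep(500 * time.Millisecond)
        }
        // Grant cluster-admin to kube-system:default, as the log shows.
        out, err := exec.Command("sudo", kubectl, "create", "clusterrolebinding", "minikube-rbac",
            "--clusterrole=cluster-admin", "--serviceaccount=kube-system:default", kubeconfig).CombinedOutput()
        fmt.Println(string(out), err)
    }
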
	I0311 21:39:47.971917   70604 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:39:47.972003   70604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:39:47.974339   70604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:39:47.974576   70604 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:39:47.976309   70604 out.go:177] * Verifying Kubernetes components...
	I0311 21:39:47.974638   70604 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:39:47.974819   70604 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:39:47.977737   70604 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-743937"
	I0311 21:39:47.977746   70604 addons.go:69] Setting default-storageclass=true in profile "embed-certs-743937"
	I0311 21:39:47.977779   70604 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-743937"
	W0311 21:39:47.977790   70604 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:39:47.977815   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.977740   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:39:47.977779   70604 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-743937"
	I0311 21:39:47.977750   70604 addons.go:69] Setting metrics-server=true in profile "embed-certs-743937"
	I0311 21:39:47.977943   70604 addons.go:234] Setting addon metrics-server=true in "embed-certs-743937"
	W0311 21:39:47.977957   70604 addons.go:243] addon metrics-server should already be in state true
	I0311 21:39:47.977985   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.978241   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978241   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978270   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.978275   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.978419   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978449   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.994019   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44139
	I0311 21:39:47.994131   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0311 21:39:47.994484   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.994514   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.994964   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.994983   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.995128   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.995143   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.995288   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0311 21:39:47.995437   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.995506   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.995583   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:47.996051   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.996073   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.996516   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.996999   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.997024   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.997383   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.997834   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.997858   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.999381   70604 addons.go:234] Setting addon default-storageclass=true in "embed-certs-743937"
	W0311 21:39:47.999406   70604 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:39:47.999432   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.999794   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.999823   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:48.012063   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41291
	I0311 21:39:48.012470   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.012899   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.012923   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.013267   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.013334   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0311 21:39:48.013484   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.013767   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.014259   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.014279   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.014556   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.014752   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.015486   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.017650   70604 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:39:48.016591   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.019717   70604 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:39:48.019736   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:39:48.019758   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.021823   70604 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:39:48.023083   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:39:48.023095   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:39:48.023108   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.023306   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.023589   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0311 21:39:48.023916   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.023937   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.024255   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.024412   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.024533   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.024653   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.025517   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.025955   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.025967   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.026292   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.027365   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.027654   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:48.027692   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:48.027909   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.027965   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.028188   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.028369   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.028496   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.028603   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.048933   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0311 21:39:48.049338   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.049918   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.049929   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.050342   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.050502   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.052274   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.052523   70604 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:39:48.052537   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:39:48.052554   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.055438   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.055864   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.055881   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.056156   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.056334   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.056495   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.056608   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.175402   70604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:39:48.196199   70604 node_ready.go:35] waiting up to 6m0s for node "embed-certs-743937" to be "Ready" ...
	I0311 21:39:48.215911   70604 node_ready.go:49] node "embed-certs-743937" has status "Ready":"True"
	I0311 21:39:48.215935   70604 node_ready.go:38] duration metric: took 19.701474ms for node "embed-certs-743937" to be "Ready" ...
	I0311 21:39:48.215945   70604 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:48.223525   70604 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.228887   70604 pod_ready.go:92] pod "etcd-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.228907   70604 pod_ready.go:81] duration metric: took 5.35597ms for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.228917   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.233811   70604 pod_ready.go:92] pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.233828   70604 pod_ready.go:81] duration metric: took 4.904721ms for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.233839   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.241831   70604 pod_ready.go:92] pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.241848   70604 pod_ready.go:81] duration metric: took 8.002663ms for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.241857   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.247609   70604 pod_ready.go:92] pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.247633   70604 pod_ready.go:81] duration metric: took 5.767693ms for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.247641   70604 pod_ready.go:38] duration metric: took 31.680305ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:48.247656   70604 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:39:48.247704   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:39:48.270201   70604 api_server.go:72] duration metric: took 295.596568ms to wait for apiserver process to appear ...
	I0311 21:39:48.270224   70604 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:39:48.270242   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:39:48.277642   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0311 21:39:48.280487   70604 api_server.go:141] control plane version: v1.28.4
	I0311 21:39:48.280505   70604 api_server.go:131] duration metric: took 10.273204ms to wait for apiserver health ...
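
The healthz check above is a plain HTTPS GET against the apiserver's /healthz endpoint, repeated until it answers 200 "ok". A minimal sketch of such a probe follows, using the address from the log; TLS verification is skipped here only because this sketch does not load the cluster CA that signs the apiserver certificate.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver cert is signed by the cluster CA, which is
                // not loaded in this sketch, so verification is skipped.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://192.168.50.114:8443/healthz"
        for i := 0; i < 30; i++ {
            resp, err := client.Get(url)
            if err == nil {
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", resp.Status)
                    resp.Body.Close()
                    return
                }
                resp.Body.Close()
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("apiserver did not become healthy")
    }
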
	I0311 21:39:48.280514   70604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:39:48.343718   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:39:48.346848   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:39:48.346864   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:39:48.400878   70604 system_pods.go:59] 4 kube-system pods found
	I0311 21:39:48.400907   70604 system_pods.go:61] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:48.400913   70604 system_pods.go:61] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:48.400919   70604 system_pods.go:61] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:48.400923   70604 system_pods.go:61] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:48.400931   70604 system_pods.go:74] duration metric: took 120.410888ms to wait for pod list to return data ...
	I0311 21:39:48.400940   70604 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:39:48.401062   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:39:48.401083   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:39:48.406115   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:39:48.492018   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:39:48.492042   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:39:48.581187   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:39:48.602016   70604 default_sa.go:45] found service account: "default"
	I0311 21:39:48.602046   70604 default_sa.go:55] duration metric: took 201.097662ms for default service account to be created ...
	I0311 21:39:48.602056   70604 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:39:48.862115   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:48.862148   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending
	I0311 21:39:48.862155   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending
	I0311 21:39:48.862159   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:48.862164   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:48.862169   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:48.862176   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:48.862180   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:48.862199   70604 retry.go:31] will retry after 266.08114ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.139648   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.139675   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.139682   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.139689   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.139694   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.139700   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.139706   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.139710   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.139724   70604 retry.go:31] will retry after 293.420416ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.476384   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.476411   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.476418   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.476423   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.476429   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.476433   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.476438   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.476442   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.476456   70604 retry.go:31] will retry after 439.10065ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.927298   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.927337   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.927348   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.927357   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.927366   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.927373   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.927381   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.927389   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.927411   70604 retry.go:31] will retry after 396.604462ms: missing components: kube-dns, kube-proxy
	I0311 21:39:50.092631   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.68647s)
	I0311 21:39:50.092698   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.092718   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093147   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093200   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093223   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093241   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093280   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.749522465s)
	I0311 21:39:50.093321   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093336   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093507   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093529   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093746   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.093759   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093773   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093797   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093805   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.094040   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.094041   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.094067   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.111807   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.111831   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.112109   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.112127   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.112132   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.291598   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.710367476s)
	I0311 21:39:50.291651   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.291671   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.292020   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.292036   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.292044   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.292050   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.292287   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.292328   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.292352   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.292367   70604 addons.go:470] Verifying addon metrics-server=true in "embed-certs-743937"
	I0311 21:39:50.294192   70604 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0311 21:39:50.295405   70604 addons.go:505] duration metric: took 2.320766016s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0311 21:39:50.339623   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:50.339651   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:50.339658   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:50.339665   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:50.339671   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:50.339677   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:50.339682   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:50.339688   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:50.339695   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:50.339704   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:39:50.339728   70604 retry.go:31] will retry after 674.573171ms: missing components: kube-dns
	I0311 21:39:51.021666   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:51.021704   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.021716   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.021723   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:51.021731   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:51.021743   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:51.021754   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:51.021760   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:51.021772   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:51.021786   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:39:51.021805   70604 retry.go:31] will retry after 716.470399ms: missing components: kube-dns
	I0311 21:39:51.745786   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:51.745818   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.745829   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.745840   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:51.745849   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:51.745855   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:51.745861   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:51.745867   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:51.745876   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:51.745886   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Running
	I0311 21:39:51.745904   70604 retry.go:31] will retry after 873.920018ms: missing components: kube-dns
	I0311 21:39:52.627896   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:52.627922   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Running
	I0311 21:39:52.627927   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Running
	I0311 21:39:52.627932   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:52.627936   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:52.627941   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:52.627944   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:52.627948   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:52.627954   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:52.627958   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Running
	I0311 21:39:52.627966   70604 system_pods.go:126] duration metric: took 4.025903884s to wait for k8s-apps to be running ...
	I0311 21:39:52.627976   70604 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:39:52.628017   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:39:52.643356   70604 system_svc.go:56] duration metric: took 15.371853ms WaitForService to wait for kubelet
	I0311 21:39:52.643378   70604 kubeadm.go:576] duration metric: took 4.668777182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:39:52.643394   70604 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:39:52.646844   70604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:39:52.646862   70604 node_conditions.go:123] node cpu capacity is 2
	I0311 21:39:52.646871   70604 node_conditions.go:105] duration metric: took 3.47245ms to run NodePressure ...
	I0311 21:39:52.646881   70604 start.go:240] waiting for startup goroutines ...
	I0311 21:39:52.646891   70604 start.go:245] waiting for cluster config update ...
	I0311 21:39:52.646904   70604 start.go:254] writing updated cluster config ...
	I0311 21:39:52.647207   70604 ssh_runner.go:195] Run: rm -f paused
	I0311 21:39:52.697687   70604 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:39:52.699641   70604 out.go:177] * Done! kubectl is now configured to use "embed-certs-743937" cluster and "default" namespace by default
	I0311 21:40:09.411155   70417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.467938624s)
	I0311 21:40:09.411245   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:09.429951   70417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:40:09.442265   70417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:40:09.453883   70417 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:40:09.453899   70417 kubeadm.go:156] found existing configuration files:
	
	I0311 21:40:09.453934   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0311 21:40:09.465106   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:40:09.465161   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:40:09.476155   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0311 21:40:09.487366   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:40:09.487413   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:40:09.497877   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0311 21:40:09.508056   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:40:09.508096   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:40:09.518709   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0311 21:40:09.529005   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:40:09.529039   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:40:09.539755   70417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:40:09.601265   70417 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 21:40:09.601399   70417 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:09.771387   70417 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:09.771548   70417 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:09.771653   70417 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:10.016610   70417 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:10.018526   70417 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:10.018613   70417 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:10.018670   70417 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:10.018752   70417 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:10.018830   70417 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:10.018926   70417 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:10.019019   70417 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:10.019436   70417 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:10.019924   70417 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:10.020435   70417 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:10.020949   70417 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:10.021470   70417 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:10.021550   70417 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:10.087827   70417 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:10.326702   70417 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:10.515476   70417 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:10.585573   70417 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:10.586277   70417 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:10.588784   70417 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:10.590786   70417 out.go:204]   - Booting up control plane ...
	I0311 21:40:10.590969   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:10.591080   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:10.591164   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:10.613086   70417 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:10.613187   70417 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:10.613224   70417 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:10.753737   70417 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:40:17.258016   70417 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503151 seconds
	I0311 21:40:17.258170   70417 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:40:17.276142   70417 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:40:17.805116   70417 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:40:17.805383   70417 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-766430 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:40:18.323836   70417 kubeadm.go:309] [bootstrap-token] Using token: 9sjslg.sf5b1bfk3wp77z35
	I0311 21:40:18.325382   70417 out.go:204]   - Configuring RBAC rules ...
	I0311 21:40:18.325478   70417 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:40:18.331585   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:40:18.344341   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:40:18.348362   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:40:18.352181   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:40:18.363299   70417 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:40:18.377835   70417 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:40:18.612013   70417 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:40:18.755215   70417 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:40:18.755235   70417 kubeadm.go:309] 
	I0311 21:40:18.755300   70417 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:40:18.755314   70417 kubeadm.go:309] 
	I0311 21:40:18.755434   70417 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:40:18.755460   70417 kubeadm.go:309] 
	I0311 21:40:18.755490   70417 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:40:18.755571   70417 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:40:18.755636   70417 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:40:18.755647   70417 kubeadm.go:309] 
	I0311 21:40:18.755721   70417 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:40:18.755731   70417 kubeadm.go:309] 
	I0311 21:40:18.755794   70417 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:40:18.755804   70417 kubeadm.go:309] 
	I0311 21:40:18.755876   70417 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:40:18.755941   70417 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:40:18.756010   70417 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:40:18.756029   70417 kubeadm.go:309] 
	I0311 21:40:18.756152   70417 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:40:18.756267   70417 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:40:18.756277   70417 kubeadm.go:309] 
	I0311 21:40:18.756391   70417 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 9sjslg.sf5b1bfk3wp77z35 \
	I0311 21:40:18.756533   70417 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:40:18.756578   70417 kubeadm.go:309] 	--control-plane 
	I0311 21:40:18.756585   70417 kubeadm.go:309] 
	I0311 21:40:18.756695   70417 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:40:18.756706   70417 kubeadm.go:309] 
	I0311 21:40:18.756844   70417 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 9sjslg.sf5b1bfk3wp77z35 \
	I0311 21:40:18.757021   70417 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:40:18.759444   70417 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:40:18.759474   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:40:18.759489   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:40:18.761354   70417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:40:18.762676   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:40:18.793496   70417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:40:18.840426   70417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:40:18.840508   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:18.840508   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-766430 minikube.k8s.io/updated_at=2024_03_11T21_40_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=default-k8s-diff-port-766430 minikube.k8s.io/primary=true
	I0311 21:40:19.150012   70417 ops.go:34] apiserver oom_adj: -16
	I0311 21:40:19.150129   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:19.650947   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:20.150969   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:20.650687   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:21.150849   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:21.650356   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:22.150737   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:22.650225   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:23.150390   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:23.650650   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:24.151081   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:24.650689   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:25.150428   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:25.650265   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:26.150198   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:26.650610   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:27.150325   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:27.650794   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:28.150855   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:28.650819   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:29.150345   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:29.650746   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.150910   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.650742   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.790472   70417 kubeadm.go:1106] duration metric: took 11.95003413s to wait for elevateKubeSystemPrivileges
	W0311 21:40:30.790506   70417 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 21:40:30.790513   70417 kubeadm.go:393] duration metric: took 5m14.024392605s to StartCluster
	I0311 21:40:30.790527   70417 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:40:30.790630   70417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:40:30.792582   70417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:40:30.792843   70417 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:40:30.794425   70417 out.go:177] * Verifying Kubernetes components...
	I0311 21:40:30.792920   70417 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:40:30.793051   70417 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:40:30.796119   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:40:30.796129   70417 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796160   70417 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.796171   70417 addons.go:243] addon metrics-server should already be in state true
	I0311 21:40:30.796197   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.796121   70417 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796127   70417 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796237   70417 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.796253   70417 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:40:30.796268   70417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-766430"
	I0311 21:40:30.796278   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.796663   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796694   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796699   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.796722   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.796777   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796807   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.812156   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0311 21:40:30.812601   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.813108   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.813138   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.813532   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.813995   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.814031   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.816427   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I0311 21:40:30.816626   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0311 21:40:30.816863   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.817015   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.817365   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.817385   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.817532   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.817557   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.817905   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.817908   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.818696   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.819070   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.819100   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.822839   70417 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.822858   70417 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:40:30.822885   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.823188   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.823202   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.834007   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I0311 21:40:30.834521   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.835017   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.835033   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.835418   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.835620   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.837838   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.839548   70417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:40:30.838397   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I0311 21:40:30.840244   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0311 21:40:30.840869   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:40:30.840885   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:40:30.840904   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.841295   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.841345   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.841877   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.841894   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.841994   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.842012   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.842246   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.842414   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.842448   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.842960   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.842985   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.844184   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.844406   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.845769   70417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:40:30.847105   70417 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:40:30.844838   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.847124   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:40:30.847142   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.845110   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.847151   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.847302   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.847424   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.847550   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:30.849856   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.850205   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.850232   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.850414   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.850575   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.850697   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.850835   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:30.861464   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0311 21:40:30.861799   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.862252   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.862271   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.862655   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.862818   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.864692   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.864956   70417 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:40:30.864978   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:40:30.864996   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.867548   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.867980   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.868013   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.868140   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.868300   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.868433   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.868558   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:31.037958   70417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:40:31.081173   70417 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-766430" to be "Ready" ...
	I0311 21:40:31.103697   70417 node_ready.go:49] node "default-k8s-diff-port-766430" has status "Ready":"True"
	I0311 21:40:31.103717   70417 node_ready.go:38] duration metric: took 22.519334ms for node "default-k8s-diff-port-766430" to be "Ready" ...
	I0311 21:40:31.103726   70417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:40:31.129595   70417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:31.184749   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:40:31.184771   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:40:31.194340   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:40:31.213567   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:40:31.213589   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:40:31.255647   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:40:31.255667   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:40:31.284917   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:40:31.309356   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:40:32.792293   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.597920266s)
	I0311 21:40:32.792337   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.792351   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.792625   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.792686   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.792703   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:32.792714   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.792724   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.793060   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.793086   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:32.793137   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.811230   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.811254   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.811583   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.811587   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.811606   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.156126   70417 pod_ready.go:92] pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.156148   70417 pod_ready.go:81] duration metric: took 2.026531002s for pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.156156   70417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.174226   70417 pod_ready.go:92] pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.174248   70417 pod_ready.go:81] duration metric: took 18.0858ms for pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.174257   70417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.186296   70417 pod_ready.go:92] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.186329   70417 pod_ready.go:81] duration metric: took 12.06396ms for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.186344   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.195902   70417 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.195930   70417 pod_ready.go:81] duration metric: took 9.577334ms for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.195945   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.203134   70417 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.203160   70417 pod_ready.go:81] duration metric: took 7.205172ms for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.203174   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t4fwc" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.449290   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.164324973s)
	I0311 21:40:33.449341   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.139948099s)
	I0311 21:40:33.449374   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449392   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449346   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449461   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449662   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449678   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449688   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449697   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449751   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.449795   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449810   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449823   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449836   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449886   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449905   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449926   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.450213   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.450256   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.450263   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.450272   70417 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-766430"
	I0311 21:40:33.453444   70417 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0311 21:40:33.454670   70417 addons.go:505] duration metric: took 2.661756652s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0311 21:40:33.534893   70417 pod_ready.go:92] pod "kube-proxy-t4fwc" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.534915   70417 pod_ready.go:81] duration metric: took 331.733613ms for pod "kube-proxy-t4fwc" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.534924   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.933950   70417 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.933973   70417 pod_ready.go:81] duration metric: took 399.042085ms for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.933981   70417 pod_ready.go:38] duration metric: took 2.830245804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:40:33.933994   70417 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:40:33.934053   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:40:33.953607   70417 api_server.go:72] duration metric: took 3.160728268s to wait for apiserver process to appear ...
	I0311 21:40:33.953629   70417 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:40:33.953650   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:40:33.959064   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0311 21:40:33.960101   70417 api_server.go:141] control plane version: v1.28.4
	I0311 21:40:33.960125   70417 api_server.go:131] duration metric: took 6.489682ms to wait for apiserver health ...
	I0311 21:40:33.960135   70417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:40:34.137026   70417 system_pods.go:59] 9 kube-system pods found
	I0311 21:40:34.137061   70417 system_pods.go:61] "coredns-5dd5756b68-kxjhf" [09678270-80f4-4bde-8080-3a3a41ecb356] Running
	I0311 21:40:34.137079   70417 system_pods.go:61] "coredns-5dd5756b68-qdcdw" [9f100559-2b0a-4068-a3e7-475b5865a1d9] Running
	I0311 21:40:34.137086   70417 system_pods.go:61] "etcd-default-k8s-diff-port-766430" [c09576c7-db47-4ce1-a8cb-d67926c413fe] Running
	I0311 21:40:34.137093   70417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766430" [f74a16b9-5e73-450f-bc62-c2e501a15ae2] Running
	I0311 21:40:34.137100   70417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766430" [abf4c5ea-4770-49a5-8480-dc9276663588] Running
	I0311 21:40:34.137105   70417 system_pods.go:61] "kube-proxy-t4fwc" [2b82ae7c-bffe-4fe4-b38c-3a789654df85] Running
	I0311 21:40:34.137111   70417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766430" [b1a26b37-7480-4f5c-bd99-785facd8b315] Running
	I0311 21:40:34.137121   70417 system_pods.go:61] "metrics-server-57f55c9bc5-9slpq" [ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:40:34.137133   70417 system_pods.go:61] "storage-provisioner" [d1d4992a-803a-4064-b372-6ba9729bd2ef] Running
	I0311 21:40:34.137147   70417 system_pods.go:74] duration metric: took 177.004603ms to wait for pod list to return data ...
	I0311 21:40:34.137201   70417 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:40:34.333563   70417 default_sa.go:45] found service account: "default"
	I0311 21:40:34.333589   70417 default_sa.go:55] duration metric: took 196.374123ms for default service account to be created ...
	I0311 21:40:34.333600   70417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:40:34.537376   70417 system_pods.go:86] 9 kube-system pods found
	I0311 21:40:34.537401   70417 system_pods.go:89] "coredns-5dd5756b68-kxjhf" [09678270-80f4-4bde-8080-3a3a41ecb356] Running
	I0311 21:40:34.537406   70417 system_pods.go:89] "coredns-5dd5756b68-qdcdw" [9f100559-2b0a-4068-a3e7-475b5865a1d9] Running
	I0311 21:40:34.537411   70417 system_pods.go:89] "etcd-default-k8s-diff-port-766430" [c09576c7-db47-4ce1-a8cb-d67926c413fe] Running
	I0311 21:40:34.537415   70417 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766430" [f74a16b9-5e73-450f-bc62-c2e501a15ae2] Running
	I0311 21:40:34.537420   70417 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766430" [abf4c5ea-4770-49a5-8480-dc9276663588] Running
	I0311 21:40:34.537423   70417 system_pods.go:89] "kube-proxy-t4fwc" [2b82ae7c-bffe-4fe4-b38c-3a789654df85] Running
	I0311 21:40:34.537427   70417 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766430" [b1a26b37-7480-4f5c-bd99-785facd8b315] Running
	I0311 21:40:34.537433   70417 system_pods.go:89] "metrics-server-57f55c9bc5-9slpq" [ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:40:34.537438   70417 system_pods.go:89] "storage-provisioner" [d1d4992a-803a-4064-b372-6ba9729bd2ef] Running
	I0311 21:40:34.537447   70417 system_pods.go:126] duration metric: took 203.840784ms to wait for k8s-apps to be running ...
	I0311 21:40:34.537453   70417 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:40:34.537493   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:34.555483   70417 system_svc.go:56] duration metric: took 18.021595ms WaitForService to wait for kubelet
	I0311 21:40:34.555511   70417 kubeadm.go:576] duration metric: took 3.76263503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:40:34.555534   70417 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:40:34.735214   70417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:40:34.735238   70417 node_conditions.go:123] node cpu capacity is 2
	I0311 21:40:34.735248   70417 node_conditions.go:105] duration metric: took 179.707447ms to run NodePressure ...
	I0311 21:40:34.735258   70417 start.go:240] waiting for startup goroutines ...
	I0311 21:40:34.735264   70417 start.go:245] waiting for cluster config update ...
	I0311 21:40:34.735274   70417 start.go:254] writing updated cluster config ...
	I0311 21:40:34.735539   70417 ssh_runner.go:195] Run: rm -f paused
	I0311 21:40:34.782710   70417 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:40:34.784627   70417 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-766430" cluster and "default" namespace by default
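The block above is the healthy path: minikube polls https://192.168.61.11:8444/healthz until it returns 200, then waits for the kube-system pods, the default service account, and the kubelet service before declaring the cluster ready. A minimal Go sketch of that style of healthz polling follows; the port-8444 URL is taken from the log, while the timeout budget, retry interval, and TLS handling are illustrative assumptions, not minikube's actual api_server.go code.

// healthz_wait.go: sketch of polling an apiserver healthz endpoint until it
// reports ok, in the spirit of the api_server.go wait logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, budget time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The real apiserver cert is signed by the cluster CA; verification
		// is skipped here only to keep the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.11:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}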
	I0311 21:40:56.380462   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:40:56.380539   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:40:56.382217   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:56.382264   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:56.382349   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:56.382450   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:56.382619   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:56.382712   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:56.384498   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:56.384579   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:56.384636   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:56.384766   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:56.384863   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:56.384967   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:56.385037   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:56.385139   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:56.385208   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:56.385281   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:56.385357   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:56.385408   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:56.385492   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:56.385567   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:56.385644   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:56.385769   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:56.385855   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:56.385962   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:56.386053   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:56.386104   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:56.386184   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:56.387594   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:56.387671   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:56.387738   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:56.387811   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:56.387914   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:56.388107   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:40:56.388182   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:40:56.388297   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388522   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388614   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388844   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388914   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389074   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389131   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389314   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389405   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389594   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389603   70908 kubeadm.go:309] 
	I0311 21:40:56.389653   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:40:56.389720   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:40:56.389732   70908 kubeadm.go:309] 
	I0311 21:40:56.389779   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:40:56.389811   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:40:56.389924   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:40:56.389933   70908 kubeadm.go:309] 
	I0311 21:40:56.390058   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:40:56.390109   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:40:56.390150   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:40:56.390159   70908 kubeadm.go:309] 
	I0311 21:40:56.390299   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:40:56.390395   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:40:56.390409   70908 kubeadm.go:309] 
	I0311 21:40:56.390512   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:40:56.390603   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:40:56.390702   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:40:56.390803   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:40:56.390833   70908 kubeadm.go:309] 
	W0311 21:40:56.390936   70908 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
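The repeated [kubelet-check] lines above come from kubeadm probing the kubelet's local healthz endpoint on port 10248 and getting connection refused, which means the kubelet process never came up to serve it. A tiny Go equivalent of that probe, illustrative only and not kubeadm's implementation:

// kubelet_probe.go: equivalent of 'curl -sSL http://localhost:10248/healthz'.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:10248/healthz")
	if err != nil {
		// The failure mode in this log:
		// dial tcp 127.0.0.1:10248: connect: connection refused.
		fmt.Println("kubelet is not running or healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
}

On a healthy node this prints 200 and "ok"; on this node it would print the connection-refused error, which is why the output points at 'systemctl status kubelet' and 'journalctl -xeu kubelet' next.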
	
	I0311 21:40:56.390995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:40:56.941058   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:56.958276   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:40:56.970464   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:40:56.970493   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:40:56.970552   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:40:56.983314   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:40:56.983372   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:40:56.993791   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:40:57.004040   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:40:57.004098   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:40:57.014471   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.024751   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:40:57.024805   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.035389   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:40:57.045511   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:40:57.045556   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
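Before retrying kubeadm init, the lines above check each existing kubeconfig under /etc/kubernetes for the expected control-plane endpoint and remove any file that lacks it (grep exiting non-zero, including the "No such file" case seen here). A sketch of that check-and-remove loop; the endpoint and file list come from the log, and running the commands locally via os/exec instead of minikube's SSH-based ssh_runner is an assumption made only to keep the example self-contained.

// stale_config_check.go: sketch of the stale kubeconfig cleanup logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file does
		// not exist; either way the stale file is removed before the retry.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}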
	I0311 21:40:57.056774   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:40:57.140620   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:57.140789   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:57.310076   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:57.310193   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:57.310280   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:57.506834   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:57.509261   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:57.509362   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:57.509446   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:57.509576   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:57.509669   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:57.509765   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:57.509839   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:57.509949   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:57.510004   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:57.510109   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:57.510231   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:57.510274   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:57.510361   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:57.585562   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:57.644460   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:57.784382   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:57.848952   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:57.867302   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:57.867791   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:57.867864   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:58.036523   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:58.039051   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:58.039176   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:58.054234   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:58.055548   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:58.057378   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:58.060167   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:41:38.062360   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:41:38.062886   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:38.063137   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:43.063592   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:43.063788   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:53.064505   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:53.064773   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:13.065744   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:13.065995   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.066718   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:53.067030   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.067070   70908 kubeadm.go:309] 
	I0311 21:42:53.067135   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:42:53.067191   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:42:53.067203   70908 kubeadm.go:309] 
	I0311 21:42:53.067259   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:42:53.067318   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:42:53.067456   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:42:53.067466   70908 kubeadm.go:309] 
	I0311 21:42:53.067590   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:42:53.067650   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:42:53.067724   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:42:53.067735   70908 kubeadm.go:309] 
	I0311 21:42:53.067889   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:42:53.068021   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:42:53.068036   70908 kubeadm.go:309] 
	I0311 21:42:53.068169   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:42:53.068297   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:42:53.068412   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:42:53.068512   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:42:53.068523   70908 kubeadm.go:309] 
	I0311 21:42:53.069455   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:42:53.069572   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:42:53.069682   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:42:53.069775   70908 kubeadm.go:393] duration metric: took 7m58.960224884s to StartCluster
	I0311 21:42:53.069833   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:42:53.069899   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:42:53.120459   70908 cri.go:89] found id: ""
	I0311 21:42:53.120486   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.120497   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:42:53.120505   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:42:53.120564   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:42:53.159639   70908 cri.go:89] found id: ""
	I0311 21:42:53.159667   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.159676   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:42:53.159682   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:42:53.159738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:42:53.199584   70908 cri.go:89] found id: ""
	I0311 21:42:53.199607   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.199614   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:42:53.199619   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:42:53.199676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:42:53.238868   70908 cri.go:89] found id: ""
	I0311 21:42:53.238901   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.238908   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:42:53.238917   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:42:53.238963   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:42:53.282172   70908 cri.go:89] found id: ""
	I0311 21:42:53.282205   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.282216   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:42:53.282225   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:42:53.282278   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:42:53.318450   70908 cri.go:89] found id: ""
	I0311 21:42:53.318481   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.318491   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:42:53.318499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:42:53.318559   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:42:53.360887   70908 cri.go:89] found id: ""
	I0311 21:42:53.360913   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.360923   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:42:53.360930   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:42:53.361027   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:42:53.414181   70908 cri.go:89] found id: ""
	I0311 21:42:53.414209   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.414220   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:42:53.414232   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:42:53.414247   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:42:53.478658   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:42:53.478689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:42:53.494577   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:42:53.494604   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:42:53.586460   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:42:53.586483   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:42:53.586500   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:42:53.697218   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:42:53.697251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
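After the second init attempt fails, the lines above gather diagnostics: for each control-plane component they ask crictl for matching containers (every query comes back empty here), then dump the kubelet, dmesg, and CRI-O journals and the overall container status. A rough Go sketch of that gathering pass; the component names and the crictl/journalctl invocations mirror the log, while running them locally via os/exec rather than over SSH is an assumption for brevity.

// gather_diagnostics.go: sketch of the "Gathering logs for ..." pass above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
	// Pull the kubelet and CRI-O journals, as in the gathering steps above.
	for _, unit := range []string{"kubelet", "crio"} {
		out, err := exec.Command("bash", "-c", "sudo journalctl -u "+unit+" -n 400").Output()
		if err != nil {
			fmt.Println("failed to read journal for", unit+":", err)
			continue
		}
		fmt.Printf("journal for %s: %d bytes\n", unit, len(out))
	}
}

With no kube-apiserver container found, the earlier "describe nodes" step could only report the connection-refused error to localhost:8443 shown above.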
	W0311 21:42:53.746291   70908 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0311 21:42:53.746336   70908 out.go:239] * 
	W0311 21:42:53.746388   70908 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.746409   70908 out.go:239] * 
	W0311 21:42:53.747362   70908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 21:42:53.750888   70908 out.go:177] 
	W0311 21:42:53.752146   70908 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.752211   70908 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0311 21:42:53.752239   70908 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0311 21:42:53.753832   70908 out.go:177] 
	
	
	==> CRI-O <==
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.866870366Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193776866840988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=289e0df5-952e-4df1-892b-7fff2bfb4d83 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.867416562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccfce636-0814-40f4-94a6-f30f9514fe95 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.867472897Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccfce636-0814-40f4-94a6-f30f9514fe95 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.867654116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a57a390f8c15987fbe43e51210a9873f7724bd1e7ad40933410a29f2b3407cb,PodSandboxId:1273cadfcc0af0128e40db4cc1aec0cf4d6b4e647ea1dc825630b79b5fe59a67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710193233737274617,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d4992a-803a-4064-b372-6ba9729bd2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 40dbf215,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf1b35d40e2aad6c0963e020c8855ec3699d0921a2ae87765573c077446c0ff,PodSandboxId:57af43447b2b9ed98db403cc8c1acb7988045092f92f4e0c3def870fa0e2870f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193232074460457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qdcdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f100559-2b0a-4068-a3e7-475b5865a1d9,},Annotations:map[string]string{io.kubernetes.container.hash: 83254e48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f89c052cdef56de31184aa7da6faea46dbfe77a74e27b0aa35ab7c4b2ab05e9,PodSandboxId:c0b3aa5425dbf6a5e2d5d4a9babf54d2d68309733021f8c13a8055bb592981a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710193231622845396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4fwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b82ae7c-bffe-4fe4-b38c-3a789654df85,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7e889e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7781ea4a1ef60f3943016af578d7da74e77b05a668eda9c9ad9cbbf897197e48,PodSandboxId:50b60fb7a7ec426aa08b804221ab2f1b361a3d378261ccc76c6ab8046c6fff01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193231917074606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kxjhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09678270-80f4-4bde-8080-
3a3a41ecb356,},Annotations:map[string]string{io.kubernetes.container.hash: 617d4e5e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5fb8d4fec270301e1152ec332841bc8c4807a9d43b27868701ad36da0e6406,PodSandboxId:f390039e37629f5d8df6f629009fd268d878278943426cd7419cacf42bfe0191,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171019321192518399
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84f656d1b2a083ea3def41c157e42d64,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f93f14553d942a939cbde0380ab131f837857eb114ee9e8c490b7783f6829ab,PodSandboxId:3d46c70fa47641c5b2a82cdc33f2b75a350f71a455d6d36f97913e79f6cd08b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193211864499246,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275286ceaa417bed6e079bc90d20c67f,},Annotations:map[string]string{io.kubernetes.container.hash: e52a60d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1dc225baf7b7994d343081e18c14986400e0ec8dc0dcef6ed399b0b73cd0ef,PodSandboxId:0591ee40586bdc0b3889628144b7e44bfa75ec5f170c66327354ee4b599957f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193211911941771,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547fe021a44521b4b353ab08995030b9,},Annotations:map[string]string{io.kubernetes.container.hash: be84fa1e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bbf59add0cd484021beb1ca1cdecdb07dac9b07140a70d3de3db131512b597,PodSandboxId:83583f2ee62f5196d5006b51b95176333e9400ab7405bb1f18a001b46ab6b834,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193211784281349,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a60ab38660991dda736a8865454b52c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ccfce636-0814-40f4-94a6-f30f9514fe95 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.919494086Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ed5611e-92c4-40cf-aa38-bc91c05b386a name=/runtime.v1.RuntimeService/Version
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.919560293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ed5611e-92c4-40cf-aa38-bc91c05b386a name=/runtime.v1.RuntimeService/Version
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.921529012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c67760c-f4b9-4161-b441-78f5a4874873 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.921918099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193776921897470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c67760c-f4b9-4161-b441-78f5a4874873 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.922725151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21f6ba7d-10f6-465b-866d-1674aca9feca name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.922777725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21f6ba7d-10f6-465b-866d-1674aca9feca name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.923110685Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a57a390f8c15987fbe43e51210a9873f7724bd1e7ad40933410a29f2b3407cb,PodSandboxId:1273cadfcc0af0128e40db4cc1aec0cf4d6b4e647ea1dc825630b79b5fe59a67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710193233737274617,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d4992a-803a-4064-b372-6ba9729bd2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 40dbf215,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf1b35d40e2aad6c0963e020c8855ec3699d0921a2ae87765573c077446c0ff,PodSandboxId:57af43447b2b9ed98db403cc8c1acb7988045092f92f4e0c3def870fa0e2870f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193232074460457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qdcdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f100559-2b0a-4068-a3e7-475b5865a1d9,},Annotations:map[string]string{io.kubernetes.container.hash: 83254e48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f89c052cdef56de31184aa7da6faea46dbfe77a74e27b0aa35ab7c4b2ab05e9,PodSandboxId:c0b3aa5425dbf6a5e2d5d4a9babf54d2d68309733021f8c13a8055bb592981a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710193231622845396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4fwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b82ae7c-bffe-4fe4-b38c-3a789654df85,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7e889e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7781ea4a1ef60f3943016af578d7da74e77b05a668eda9c9ad9cbbf897197e48,PodSandboxId:50b60fb7a7ec426aa08b804221ab2f1b361a3d378261ccc76c6ab8046c6fff01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193231917074606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kxjhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09678270-80f4-4bde-8080-
3a3a41ecb356,},Annotations:map[string]string{io.kubernetes.container.hash: 617d4e5e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5fb8d4fec270301e1152ec332841bc8c4807a9d43b27868701ad36da0e6406,PodSandboxId:f390039e37629f5d8df6f629009fd268d878278943426cd7419cacf42bfe0191,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171019321192518399
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84f656d1b2a083ea3def41c157e42d64,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f93f14553d942a939cbde0380ab131f837857eb114ee9e8c490b7783f6829ab,PodSandboxId:3d46c70fa47641c5b2a82cdc33f2b75a350f71a455d6d36f97913e79f6cd08b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193211864499246,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275286ceaa417bed6e079bc90d20c67f,},Annotations:map[string]string{io.kubernetes.container.hash: e52a60d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1dc225baf7b7994d343081e18c14986400e0ec8dc0dcef6ed399b0b73cd0ef,PodSandboxId:0591ee40586bdc0b3889628144b7e44bfa75ec5f170c66327354ee4b599957f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193211911941771,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547fe021a44521b4b353ab08995030b9,},Annotations:map[string]string{io.kubernetes.container.hash: be84fa1e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bbf59add0cd484021beb1ca1cdecdb07dac9b07140a70d3de3db131512b597,PodSandboxId:83583f2ee62f5196d5006b51b95176333e9400ab7405bb1f18a001b46ab6b834,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193211784281349,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a60ab38660991dda736a8865454b52c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21f6ba7d-10f6-465b-866d-1674aca9feca name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.976891737Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9998a51b-9364-42b2-94ab-4776f39a8419 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.976959913Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9998a51b-9364-42b2-94ab-4776f39a8419 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.979229951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7fa392f3-6d79-45c6-9fe1-0126ed4601cf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.979594291Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193776979576622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7fa392f3-6d79-45c6-9fe1-0126ed4601cf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.980401348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cabacea-1c96-4c12-837a-cde668bac1bc name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.980450670Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7cabacea-1c96-4c12-837a-cde668bac1bc name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:49:36 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:36.980631847Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a57a390f8c15987fbe43e51210a9873f7724bd1e7ad40933410a29f2b3407cb,PodSandboxId:1273cadfcc0af0128e40db4cc1aec0cf4d6b4e647ea1dc825630b79b5fe59a67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710193233737274617,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d4992a-803a-4064-b372-6ba9729bd2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 40dbf215,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf1b35d40e2aad6c0963e020c8855ec3699d0921a2ae87765573c077446c0ff,PodSandboxId:57af43447b2b9ed98db403cc8c1acb7988045092f92f4e0c3def870fa0e2870f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193232074460457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qdcdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f100559-2b0a-4068-a3e7-475b5865a1d9,},Annotations:map[string]string{io.kubernetes.container.hash: 83254e48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f89c052cdef56de31184aa7da6faea46dbfe77a74e27b0aa35ab7c4b2ab05e9,PodSandboxId:c0b3aa5425dbf6a5e2d5d4a9babf54d2d68309733021f8c13a8055bb592981a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710193231622845396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4fwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b82ae7c-bffe-4fe4-b38c-3a789654df85,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7e889e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7781ea4a1ef60f3943016af578d7da74e77b05a668eda9c9ad9cbbf897197e48,PodSandboxId:50b60fb7a7ec426aa08b804221ab2f1b361a3d378261ccc76c6ab8046c6fff01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193231917074606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kxjhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09678270-80f4-4bde-8080-
3a3a41ecb356,},Annotations:map[string]string{io.kubernetes.container.hash: 617d4e5e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5fb8d4fec270301e1152ec332841bc8c4807a9d43b27868701ad36da0e6406,PodSandboxId:f390039e37629f5d8df6f629009fd268d878278943426cd7419cacf42bfe0191,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171019321192518399
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84f656d1b2a083ea3def41c157e42d64,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f93f14553d942a939cbde0380ab131f837857eb114ee9e8c490b7783f6829ab,PodSandboxId:3d46c70fa47641c5b2a82cdc33f2b75a350f71a455d6d36f97913e79f6cd08b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193211864499246,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275286ceaa417bed6e079bc90d20c67f,},Annotations:map[string]string{io.kubernetes.container.hash: e52a60d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1dc225baf7b7994d343081e18c14986400e0ec8dc0dcef6ed399b0b73cd0ef,PodSandboxId:0591ee40586bdc0b3889628144b7e44bfa75ec5f170c66327354ee4b599957f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193211911941771,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547fe021a44521b4b353ab08995030b9,},Annotations:map[string]string{io.kubernetes.container.hash: be84fa1e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bbf59add0cd484021beb1ca1cdecdb07dac9b07140a70d3de3db131512b597,PodSandboxId:83583f2ee62f5196d5006b51b95176333e9400ab7405bb1f18a001b46ab6b834,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193211784281349,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a60ab38660991dda736a8865454b52c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7cabacea-1c96-4c12-837a-cde668bac1bc name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:49:37 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:37.015096151Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6732fb36-4f46-44a0-bffc-167e3b7d3320 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:49:37 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:37.015167795Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6732fb36-4f46-44a0-bffc-167e3b7d3320 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:49:37 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:37.017606616Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be574654-e6f9-4e7f-af2d-33d5ca1224c2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:49:37 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:37.017964524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193777017945645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be574654-e6f9-4e7f-af2d-33d5ca1224c2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:49:37 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:37.019238828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3651f41c-0e49-4d4d-aede-51b50ca63419 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:49:37 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:37.019294590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3651f41c-0e49-4d4d-aede-51b50ca63419 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:49:37 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:49:37.019474584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a57a390f8c15987fbe43e51210a9873f7724bd1e7ad40933410a29f2b3407cb,PodSandboxId:1273cadfcc0af0128e40db4cc1aec0cf4d6b4e647ea1dc825630b79b5fe59a67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710193233737274617,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d4992a-803a-4064-b372-6ba9729bd2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 40dbf215,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf1b35d40e2aad6c0963e020c8855ec3699d0921a2ae87765573c077446c0ff,PodSandboxId:57af43447b2b9ed98db403cc8c1acb7988045092f92f4e0c3def870fa0e2870f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193232074460457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qdcdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f100559-2b0a-4068-a3e7-475b5865a1d9,},Annotations:map[string]string{io.kubernetes.container.hash: 83254e48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f89c052cdef56de31184aa7da6faea46dbfe77a74e27b0aa35ab7c4b2ab05e9,PodSandboxId:c0b3aa5425dbf6a5e2d5d4a9babf54d2d68309733021f8c13a8055bb592981a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710193231622845396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4fwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b82ae7c-bffe-4fe4-b38c-3a789654df85,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7e889e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7781ea4a1ef60f3943016af578d7da74e77b05a668eda9c9ad9cbbf897197e48,PodSandboxId:50b60fb7a7ec426aa08b804221ab2f1b361a3d378261ccc76c6ab8046c6fff01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193231917074606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kxjhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09678270-80f4-4bde-8080-
3a3a41ecb356,},Annotations:map[string]string{io.kubernetes.container.hash: 617d4e5e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5fb8d4fec270301e1152ec332841bc8c4807a9d43b27868701ad36da0e6406,PodSandboxId:f390039e37629f5d8df6f629009fd268d878278943426cd7419cacf42bfe0191,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171019321192518399
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84f656d1b2a083ea3def41c157e42d64,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f93f14553d942a939cbde0380ab131f837857eb114ee9e8c490b7783f6829ab,PodSandboxId:3d46c70fa47641c5b2a82cdc33f2b75a350f71a455d6d36f97913e79f6cd08b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193211864499246,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275286ceaa417bed6e079bc90d20c67f,},Annotations:map[string]string{io.kubernetes.container.hash: e52a60d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1dc225baf7b7994d343081e18c14986400e0ec8dc0dcef6ed399b0b73cd0ef,PodSandboxId:0591ee40586bdc0b3889628144b7e44bfa75ec5f170c66327354ee4b599957f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193211911941771,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547fe021a44521b4b353ab08995030b9,},Annotations:map[string]string{io.kubernetes.container.hash: be84fa1e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bbf59add0cd484021beb1ca1cdecdb07dac9b07140a70d3de3db131512b597,PodSandboxId:83583f2ee62f5196d5006b51b95176333e9400ab7405bb1f18a001b46ab6b834,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193211784281349,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a60ab38660991dda736a8865454b52c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3651f41c-0e49-4d4d-aede-51b50ca63419 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8a57a390f8c15       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   1273cadfcc0af       storage-provisioner
	abf1b35d40e2a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   57af43447b2b9       coredns-5dd5756b68-qdcdw
	7781ea4a1ef60       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   50b60fb7a7ec4       coredns-5dd5756b68-kxjhf
	5f89c052cdef5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   c0b3aa5425dbf       kube-proxy-t4fwc
	dd5fb8d4fec27       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   f390039e37629       kube-scheduler-default-k8s-diff-port-766430
	3c1dc225baf7b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   0591ee40586bd       kube-apiserver-default-k8s-diff-port-766430
	5f93f14553d94       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   3d46c70fa4764       etcd-default-k8s-diff-port-766430
	63bbf59add0cd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   83583f2ee62f5       kube-controller-manager-default-k8s-diff-port-766430
	
	
	==> coredns [7781ea4a1ef60f3943016af578d7da74e77b05a668eda9c9ad9cbbf897197e48] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [abf1b35d40e2aad6c0963e020c8855ec3699d0921a2ae87765573c077446c0ff] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-766430
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-766430
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=default-k8s-diff-port-766430
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T21_40_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 21:40:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-766430
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:49:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 21:45:46 +0000   Mon, 11 Mar 2024 21:40:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 21:45:46 +0000   Mon, 11 Mar 2024 21:40:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 21:45:46 +0000   Mon, 11 Mar 2024 21:40:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 21:45:46 +0000   Mon, 11 Mar 2024 21:40:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.11
	  Hostname:    default-k8s-diff-port-766430
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 68d6742a9a424b7182e2499f72626db5
	  System UUID:                68d6742a-9a42-4b71-82e2-499f72626db5
	  Boot ID:                    3effb575-f6a2-493a-bef9-c4a2015cfb66
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-kxjhf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-5dd5756b68-qdcdw                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-766430                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-766430             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-766430    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-t4fwc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-766430             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-57f55c9bc5-9slpq                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m26s (x8 over 9m26s)  kubelet          Node default-k8s-diff-port-766430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m26s (x8 over 9m26s)  kubelet          Node default-k8s-diff-port-766430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m26s (x7 over 9m26s)  kubelet          Node default-k8s-diff-port-766430 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet          Node default-k8s-diff-port-766430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet          Node default-k8s-diff-port-766430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet          Node default-k8s-diff-port-766430 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s                   node-controller  Node default-k8s-diff-port-766430 event: Registered Node default-k8s-diff-port-766430 in Controller
	
	
	==> dmesg <==
	[  +0.063050] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050568] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.028029] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.527508] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.730561] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar11 21:35] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.064178] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070096] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.189536] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.159773] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.282150] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +5.558152] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[  +0.069754] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.054410] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +5.674994] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.077885] kauditd_printk_skb: 74 callbacks suppressed
	[Mar11 21:40] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.424411] systemd-fstab-generator[3413]: Ignoring "noauto" option for root device
	[  +7.763340] systemd-fstab-generator[3731]: Ignoring "noauto" option for root device
	[  +0.078979] kauditd_printk_skb: 55 callbacks suppressed
	[ +12.402976] systemd-fstab-generator[3918]: Ignoring "noauto" option for root device
	[  +0.107452] kauditd_printk_skb: 12 callbacks suppressed
	[Mar11 21:41] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [5f93f14553d942a939cbde0380ab131f837857eb114ee9e8c490b7783f6829ab] <==
	{"level":"info","ts":"2024-03-11T21:40:12.321489Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T21:40:12.321707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 switched to configuration voters=(2924367896750053921)"}
	{"level":"info","ts":"2024-03-11T21:40:12.321828Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fb6e72b45dde42f9","local-member-id":"2895711bae57da21","added-peer-id":"2895711bae57da21","added-peer-peer-urls":["https://192.168.61.11:2380"]}
	{"level":"info","ts":"2024-03-11T21:40:12.32184Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"2895711bae57da21","initial-advertise-peer-urls":["https://192.168.61.11:2380"],"listen-peer-urls":["https://192.168.61.11:2380"],"advertise-client-urls":["https://192.168.61.11:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.11:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T21:40:12.322413Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T21:40:12.321943Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.11:2380"}
	{"level":"info","ts":"2024-03-11T21:40:12.326192Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.11:2380"}
	{"level":"info","ts":"2024-03-11T21:40:13.07007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-11T21:40:13.07017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-11T21:40:13.070213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 received MsgPreVoteResp from 2895711bae57da21 at term 1"}
	{"level":"info","ts":"2024-03-11T21:40:13.070247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 became candidate at term 2"}
	{"level":"info","ts":"2024-03-11T21:40:13.070279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 received MsgVoteResp from 2895711bae57da21 at term 2"}
	{"level":"info","ts":"2024-03-11T21:40:13.070306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 became leader at term 2"}
	{"level":"info","ts":"2024-03-11T21:40:13.07033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2895711bae57da21 elected leader 2895711bae57da21 at term 2"}
	{"level":"info","ts":"2024-03-11T21:40:13.075167Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:40:13.079193Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fb6e72b45dde42f9","local-member-id":"2895711bae57da21","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:40:13.07932Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:40:13.079357Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:40:13.079393Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"2895711bae57da21","local-member-attributes":"{Name:default-k8s-diff-port-766430 ClientURLs:[https://192.168.61.11:2379]}","request-path":"/0/members/2895711bae57da21/attributes","cluster-id":"fb6e72b45dde42f9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T21:40:13.079537Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:40:13.08079Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T21:40:13.083165Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:40:13.084134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.11:2379"}
	{"level":"info","ts":"2024-03-11T21:40:13.086067Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T21:40:13.086118Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:49:37 up 14 min,  0 users,  load average: 0.26, 0.24, 0.18
	Linux default-k8s-diff-port-766430 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3c1dc225baf7b7994d343081e18c14986400e0ec8dc0dcef6ed399b0b73cd0ef] <==
	W0311 21:45:16.137236       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:45:16.137330       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:45:16.137357       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:45:16.137285       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:45:16.137481       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:45:16.138473       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 21:46:15.036178       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0311 21:46:16.138486       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:46:16.138548       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:46:16.138554       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:46:16.138622       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:46:16.138673       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:46:16.139716       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 21:47:15.036935       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 21:48:15.036605       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0311 21:48:16.138920       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:48:16.139217       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:48:16.139335       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:48:16.140298       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:48:16.140474       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:48:16.140509       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 21:49:15.036546       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [63bbf59add0cd484021beb1ca1cdecdb07dac9b07140a70d3de3db131512b597] <==
	I0311 21:44:00.618448       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:44:30.050677       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:44:30.627385       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:45:00.057589       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:45:00.637525       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:45:30.064454       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:45:30.647375       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:46:00.070794       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:46:00.655869       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0311 21:46:22.842544       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="577.166µs"
	E0311 21:46:30.080206       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:46:30.664769       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0311 21:46:36.832698       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="147.73µs"
	E0311 21:47:00.089049       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:47:00.675960       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:47:30.096825       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:47:30.687564       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:48:00.103563       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:48:00.700610       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:48:30.109286       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:48:30.708944       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:49:00.115844       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:49:00.719776       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:49:30.122757       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:49:30.728926       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
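The controller-manager errors are the downstream effect of the same unavailable aggregated API: the resource-quota and garbage-collector loops keep seeing metrics.k8s.io/v1beta1 as stale during discovery. A minimal sketch, assuming the same context, that reproduces the discovery failure from the client side:

	# Raw request against the aggregated group; expect a 503 while metrics-server is down
	kubectl --context default-k8s-diff-port-766430 get --raw /apis/metrics.k8s.io/v1beta1
	# Higher-level equivalent; typically reports that the Metrics API is not available
	kubectl --context default-k8s-diff-port-766430 top nodes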
	
	
	==> kube-proxy [5f89c052cdef56de31184aa7da6faea46dbfe77a74e27b0aa35ab7c4b2ab05e9] <==
	I0311 21:40:32.661526       1 server_others.go:69] "Using iptables proxy"
	I0311 21:40:32.690174       1 node.go:141] Successfully retrieved node IP: 192.168.61.11
	I0311 21:40:32.839195       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 21:40:32.839248       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 21:40:32.841774       1 server_others.go:152] "Using iptables Proxier"
	I0311 21:40:32.849519       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 21:40:32.850086       1 server.go:846] "Version info" version="v1.28.4"
	I0311 21:40:32.850166       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:40:32.852381       1 config.go:188] "Starting service config controller"
	I0311 21:40:32.852895       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 21:40:32.852925       1 config.go:97] "Starting endpoint slice config controller"
	I0311 21:40:32.852930       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 21:40:32.854659       1 config.go:315] "Starting node config controller"
	I0311 21:40:32.854701       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 21:40:32.953120       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 21:40:32.953185       1 shared_informer.go:318] Caches are synced for service config
	I0311 21:40:32.969371       1 shared_informer.go:318] Caches are synced for node config
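The kube-proxy log shows a clean startup in single-stack IPv4 iptables mode (no IPv6 support detected in the guest). As a hedged sketch, the NAT chains the proxier programs can be inspected from inside the VM; the profile name below is taken from the node name in these logs:

	# List the first service NAT rules installed by the iptables proxier
	minikube -p default-k8s-diff-port-766430 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | head -20"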
	
	
	==> kube-scheduler [dd5fb8d4fec270301e1152ec332841bc8c4807a9d43b27868701ad36da0e6406] <==
	E0311 21:40:15.240108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 21:40:15.240546       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 21:40:15.240714       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 21:40:15.241103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 21:40:15.241250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 21:40:15.241586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 21:40:15.226115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 21:40:15.243891       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 21:40:15.244585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 21:40:15.244322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 21:40:16.147598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 21:40:16.147693       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 21:40:16.195326       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 21:40:16.195378       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 21:40:16.273959       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 21:40:16.274154       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 21:40:16.346703       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 21:40:16.346811       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 21:40:16.354906       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 21:40:16.355090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 21:40:16.400135       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 21:40:16.400256       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0311 21:40:16.485580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 21:40:16.485672       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0311 21:40:18.092143       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
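The scheduler's list/watch "forbidden" errors are transient: they appear while the apiserver is still coming up and before the scheduler's RBAC authorization is served, and the final "Caches are synced" line shows the informers recovered. A minimal sketch, assuming the same context, for probing those permissions after startup by impersonating the scheduler identity:

	# Both should answer "yes" once system:kube-scheduler's cluster role binding is active
	kubectl --context default-k8s-diff-port-766430 auth can-i list pods --as=system:kube-scheduler
	kubectl --context default-k8s-diff-port-766430 auth can-i watch statefulsets.apps --as=system:kube-scheduler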
	
	
	==> kubelet <==
	Mar 11 21:47:18 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:47:18.873698    3738 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:47:18 default-k8s-diff-port-766430 kubelet[3738]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:47:18 default-k8s-diff-port-766430 kubelet[3738]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:47:18 default-k8s-diff-port-766430 kubelet[3738]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:47:18 default-k8s-diff-port-766430 kubelet[3738]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:47:30 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:47:30.810140    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:47:41 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:47:41.810248    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:47:55 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:47:55.810289    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:48:08 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:48:08.811184    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:48:18 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:48:18.873277    3738 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:48:18 default-k8s-diff-port-766430 kubelet[3738]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:48:18 default-k8s-diff-port-766430 kubelet[3738]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:48:18 default-k8s-diff-port-766430 kubelet[3738]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:48:18 default-k8s-diff-port-766430 kubelet[3738]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:48:20 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:48:20.813435    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:48:31 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:48:31.810510    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:48:45 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:48:45.810401    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:48:58 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:48:58.810592    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:49:11 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:49:11.810330    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:49:18 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:49:18.873430    3738 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:49:18 default-k8s-diff-port-766430 kubelet[3738]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:49:18 default-k8s-diff-port-766430 kubelet[3738]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:49:18 default-k8s-diff-port-766430 kubelet[3738]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:49:18 default-k8s-diff-port-766430 kubelet[3738]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:49:23 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:49:23.810927    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
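Two errors repeat throughout the kubelet log: the iptables canary chain cannot be created because the guest kernel exposes no ip6tables nat table, and the metrics-server container stays in ImagePullBackOff because the test points it at the unreachable registry fake.domain, so the pull can never succeed. A hedged sketch for confirming both (the label selector is assumed from the minikube metrics-server addon manifests, not from this log):

	# Reproduce the canary failure: the IPv6 NAT table is missing in the guest kernel
	minikube -p default-k8s-diff-port-766430 ssh "sudo ip6tables -t nat -L"
	# Show the ImagePullBackOff events on the metrics-server pod in kube-system
	kubectl --context default-k8s-diff-port-766430 -n kube-system describe pod -l k8s-app=metrics-server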
	
	
	==> storage-provisioner [8a57a390f8c15987fbe43e51210a9873f7724bd1e7ad40933410a29f2b3407cb] <==
	I0311 21:40:33.886051       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 21:40:33.902810       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 21:40:33.902884       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 21:40:33.917132       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 21:40:33.917409       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-766430_dd940636-24d5-4105-81b4-842f67ac10d7!
	I0311 21:40:33.919368       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dbc8c8b6-2640-4db0-907a-adf39a31a724", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-766430_dd940636-24d5-4105-81b4-842f67ac10d7 became leader
	I0311 21:40:34.018249       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-766430_dd940636-24d5-4105-81b4-842f67ac10d7!
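The storage-provisioner block is a normal startup: it acquires the kube-system/k8s.io-minikube-hostpath leader-election lease and then starts the hostpath provisioner controller. A hedged sketch for reading the recorded leader back (the lock is an Endpoints object; looking for the control-plane.alpha.kubernetes.io/leader annotation is an assumption based on the endpoints-based election shown here):

	# The holder identity from the election above should appear in the leader annotation
	kubectl --context default-k8s-diff-port-766430 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml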
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-766430 -n default-k8s-diff-port-766430
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-766430 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9slpq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-766430 describe pod metrics-server-57f55c9bc5-9slpq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-766430 describe pod metrics-server-57f55c9bc5-9slpq: exit status 1 (64.556916ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9slpq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-766430 describe pod metrics-server-57f55c9bc5-9slpq: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.20s)
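The NotFound from the post-mortem describe is most likely a namespace mismatch: the helper queries the default namespace, while the only non-running pod, metrics-server-57f55c9bc5-9slpq, is a kube-system pod in the kubelet log above (it may also have been recreated under a new name by then). A hedged sketch of the namespaced lookup that would surface its ImagePullBackOff events:

	kubectl --context default-k8s-diff-port-766430 -n kube-system describe pod metrics-server-57f55c9bc5-9slpq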

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
E0311 21:43:51.607791   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:43:51.643974   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
E0311 21:44:00.191009   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
E0311 21:44:18.472100   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(the warning above appeared 28 consecutive times)
E0311 21:44:51.177529   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(the warning above appeared 24 consecutive times)
E0311 21:45:14.653321   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:45:14.687487   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(the warning above appeared 27 consecutive times)
E0311 21:45:41.984157   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(the warning above appeared 33 consecutive times)
E0311 21:46:14.223231   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(the warning above appeared 9 consecutive times)
E0311 21:46:23.916191   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(the warning above appeared 5 consecutive times)
E0311 21:46:28.681133   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(the warning above appeared 30 consecutive times)
E0311 21:46:58.807849   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(the warning above appeared 23 consecutive times within this span)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
E0311 21:47:37.144657   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(last message repeated 1 more time)
E0311 21:47:38.935286   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(last message repeated 16 more times)
E0311 21:47:55.427138   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(last message repeated 55 more times)
E0311 21:48:51.608867   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:48:51.644063   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(last message repeated 58 more times)
E0311 21:49:51.176940   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
E0311 21:50:01.855464   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
E0311 21:51:23.916505   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
E0311 21:51:28.681839   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239315 -n old-k8s-version-239315
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239315 -n old-k8s-version-239315: exit status 2 (251.368638ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-239315" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
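The wait above polls the same namespace and label selector that appear in the refused API requests. As a manual cross-check (a sketch, not part of the recorded run, and assuming the profile's apiserver becomes reachable again), the equivalent query against this minikube context would be:

	kubectl --context old-k8s-version-239315 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard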
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315: exit status 2 (248.317172ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-239315 logs -n 25
E0311 21:51:58.808417   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-239315 logs -n 25: (1.513960262s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-427678 sudo cat                              | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo find                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo crio                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-427678                                       | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-124446 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | disable-driver-mounts-124446                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:26 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-766430  | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-324578             | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-743937            | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239315        | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-766430       | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-324578                  | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:40 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-743937                 | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239315             | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC | 11 Mar 24 21:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 21:30:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 21:30:01.044166   70908 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:30:01.044254   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044259   70908 out.go:304] Setting ErrFile to fd 2...
	I0311 21:30:01.044263   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044451   70908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:30:01.044970   70908 out.go:298] Setting JSON to false
	I0311 21:30:01.045838   70908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7950,"bootTime":1710184651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:30:01.045894   70908 start.go:139] virtualization: kvm guest
	I0311 21:30:01.048311   70908 out.go:177] * [old-k8s-version-239315] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:30:01.050003   70908 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:30:01.050011   70908 notify.go:220] Checking for updates...
	I0311 21:30:01.051498   70908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:30:01.052999   70908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:30:01.054439   70908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:30:01.055768   70908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:30:01.057137   70908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:30:01.058760   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:30:01.059167   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.059205   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.073734   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0311 21:30:01.074087   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.074586   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.074618   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.074966   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.075173   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.077005   70908 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 21:30:01.078583   70908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:30:01.078879   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.078914   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.093226   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0311 21:30:01.093614   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.094174   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.094243   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.094616   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.094805   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.128302   70908 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 21:30:01.129965   70908 start.go:297] selected driver: kvm2
	I0311 21:30:01.129991   70908 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.130113   70908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:30:01.131050   70908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.131115   70908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:30:01.145452   70908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:30:01.145782   70908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:30:01.145811   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:30:01.145819   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:30:01.145863   70908 start.go:340] cluster config:
	{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.145954   70908 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.147725   70908 out.go:177] * Starting "old-k8s-version-239315" primary control-plane node in "old-k8s-version-239315" cluster
	I0311 21:30:01.148916   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:30:01.148943   70908 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0311 21:30:01.148955   70908 cache.go:56] Caching tarball of preloaded images
	I0311 21:30:01.149022   70908 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:30:01.149032   70908 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0311 21:30:01.149114   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:30:01.149263   70908 start.go:360] acquireMachinesLock for old-k8s-version-239315: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
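
The acquireMachinesLock entry above is where this start run queues behind any other profile that is already creating or repairing a VM; the parameters show a 500ms retry delay and a 13m overall timeout. A minimal, hedged sketch of a lock loop with that shape (plain lock file and hypothetical path, not minikube's actual implementation) looks like:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file, retrying every `delay`
	// until `timeout` expires, mirroring the Delay/Timeout pair in the log.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			// O_EXCL makes creation fail while another process holds the file.
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; safe to start the VM")
	}
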
	I0311 21:30:05.352968   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:08.425086   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:14.504922   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:17.577080   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:23.656996   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:26.729009   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:32.809042   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:35.881008   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:41.960992   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:45.033096   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:51.112925   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:54.184989   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:00.265058   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:03.337012   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:09.416960   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:12.489005   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:18.569021   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:21.640990   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:27.721019   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:30.793040   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:36.872985   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:39.945005   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:46.025035   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:49.096988   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:55.176985   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:58.249009   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:04.328981   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:07.401006   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:13.480986   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:16.552965   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:22.632997   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:25.705064   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:31.784993   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:34.857027   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:40.937002   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:44.008989   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:50.088959   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:53.161092   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:59.241045   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:02.313084   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:08.393056   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:11.465079   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:17.545057   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:20.617082   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:26.697000   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:29.768926   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:35.849024   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:38.921096   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
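
The repeated "dial tcp 192.168.61.11:22: connect: no route to host" lines above are process 70417 polling the default-k8s-diff-port guest's SSH port while that VM is still unreachable; each attempt fails fast and is retried a few seconds later. A rough sketch of such a probe loop (the address, poll interval, and timeout below are placeholders, not libmachine's internals):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls addr until a TCP connection succeeds or the deadline passes.
	func waitForSSH(addr string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("gave up waiting for %s: last error: %v", addr, err)
			}
			fmt.Printf("Error dialing TCP: %v; retrying in %s\n", err, interval)
			time.Sleep(interval)
		}
	}

	func main() {
		if err := waitForSSH("192.168.61.11:22", 3*time.Second, 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
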
	I0311 21:33:41.925305   70458 start.go:364] duration metric: took 4m36.419231792s to acquireMachinesLock for "no-preload-324578"
	I0311 21:33:41.925360   70458 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:33:41.925368   70458 fix.go:54] fixHost starting: 
	I0311 21:33:41.925768   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:33:41.925798   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:33:41.940654   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0311 21:33:41.941130   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:33:41.941619   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:33:41.941646   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:33:41.942045   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:33:41.942209   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:33:41.942370   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:33:41.944009   70458 fix.go:112] recreateIfNeeded on no-preload-324578: state=Stopped err=<nil>
	I0311 21:33:41.944030   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	W0311 21:33:41.944231   70458 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:33:41.946020   70458 out.go:177] * Restarting existing kvm2 VM for "no-preload-324578" ...
	I0311 21:33:41.922711   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:33:41.922754   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:33:41.923131   70417 buildroot.go:166] provisioning hostname "default-k8s-diff-port-766430"
	I0311 21:33:41.923158   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:33:41.923430   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:33:41.925178   70417 machine.go:97] duration metric: took 4m37.414792129s to provisionDockerMachine
	I0311 21:33:41.925213   70417 fix.go:56] duration metric: took 4m37.435982654s for fixHost
	I0311 21:33:41.925219   70417 start.go:83] releasing machines lock for "default-k8s-diff-port-766430", held for 4m37.436000925s
	W0311 21:33:41.925242   70417 start.go:713] error starting host: provision: host is not running
	W0311 21:33:41.925330   70417 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0311 21:33:41.925343   70417 start.go:728] Will try again in 5 seconds ...
	I0311 21:33:41.947495   70458 main.go:141] libmachine: (no-preload-324578) Calling .Start
	I0311 21:33:41.947676   70458 main.go:141] libmachine: (no-preload-324578) Ensuring networks are active...
	I0311 21:33:41.948386   70458 main.go:141] libmachine: (no-preload-324578) Ensuring network default is active
	I0311 21:33:41.948724   70458 main.go:141] libmachine: (no-preload-324578) Ensuring network mk-no-preload-324578 is active
	I0311 21:33:41.949117   70458 main.go:141] libmachine: (no-preload-324578) Getting domain xml...
	I0311 21:33:41.949876   70458 main.go:141] libmachine: (no-preload-324578) Creating domain...
	I0311 21:33:43.129733   70458 main.go:141] libmachine: (no-preload-324578) Waiting to get IP...
	I0311 21:33:43.130601   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.131006   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.131053   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.130975   71444 retry.go:31] will retry after 209.203314ms: waiting for machine to come up
	I0311 21:33:43.341724   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.342324   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.342361   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.342279   71444 retry.go:31] will retry after 375.396917ms: waiting for machine to come up
	I0311 21:33:43.718906   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.719329   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.719351   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.719288   71444 retry.go:31] will retry after 428.365393ms: waiting for machine to come up
	I0311 21:33:44.148895   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:44.149334   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:44.149358   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:44.149284   71444 retry.go:31] will retry after 561.478535ms: waiting for machine to come up
	I0311 21:33:44.712065   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:44.712548   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:44.712576   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:44.712465   71444 retry.go:31] will retry after 700.993236ms: waiting for machine to come up
	I0311 21:33:46.926379   70417 start.go:360] acquireMachinesLock for default-k8s-diff-port-766430: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:33:45.415695   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:45.416242   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:45.416276   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:45.416215   71444 retry.go:31] will retry after 809.474202ms: waiting for machine to come up
	I0311 21:33:46.227098   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:46.227573   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:46.227608   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:46.227520   71444 retry.go:31] will retry after 1.075187328s: waiting for machine to come up
	I0311 21:33:47.303981   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:47.304454   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:47.304483   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:47.304397   71444 retry.go:31] will retry after 1.145290319s: waiting for machine to come up
	I0311 21:33:48.451871   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:48.452316   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:48.452350   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:48.452267   71444 retry.go:31] will retry after 1.172261063s: waiting for machine to come up
	I0311 21:33:49.626502   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:49.627067   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:49.627089   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:49.627023   71444 retry.go:31] will retry after 2.201479026s: waiting for machine to come up
	I0311 21:33:51.831519   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:51.831972   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:51.832008   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:51.831905   71444 retry.go:31] will retry after 2.888101699s: waiting for machine to come up
	I0311 21:33:54.721322   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:54.721753   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:54.721773   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:54.721722   71444 retry.go:31] will retry after 3.512655296s: waiting for machine to come up
	I0311 21:33:58.235767   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:58.236180   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:58.236219   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:58.236141   71444 retry.go:31] will retry after 3.975760652s: waiting for machine to come up
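
The retry.go:31 entries above show the wait for the no-preload VM's DHCP lease backing off with growing, jittered delays, from roughly 209ms up to about 4s between checks. A toy sketch of that kind of schedule follows; the 1.5 growth factor and 4s cap are assumptions for illustration, not minikube's constants:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// nextDelay grows the wait between polls and adds jitter so concurrent
	// waiters do not all re-read the libvirt DHCP leases at the same instant.
	func nextDelay(prev time.Duration) time.Duration {
		grown := time.Duration(float64(prev) * 1.5)
		jitter := time.Duration(rand.Int63n(int64(grown) / 4))
		if d := grown + jitter; d < 4*time.Second {
			return d
		}
		return 4 * time.Second
	}

	func main() {
		// Print the schedule only; a real waiter would sleep, then re-check the lease.
		d := 200 * time.Millisecond
		for attempt := 1; attempt <= 12; attempt++ {
			fmt.Printf("attempt %d: will retry after %s: waiting for machine to come up\n", attempt, d)
			d = nextDelay(d)
		}
	}
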
	I0311 21:34:03.525918   70604 start.go:364] duration metric: took 4m44.449252209s to acquireMachinesLock for "embed-certs-743937"
	I0311 21:34:03.525995   70604 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:03.526008   70604 fix.go:54] fixHost starting: 
	I0311 21:34:03.526428   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:03.526470   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:03.542427   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I0311 21:34:03.542857   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:03.543292   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:34:03.543317   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:03.543616   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:03.543806   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:03.543991   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:34:03.545366   70604 fix.go:112] recreateIfNeeded on embed-certs-743937: state=Stopped err=<nil>
	I0311 21:34:03.545391   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	W0311 21:34:03.545540   70604 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:03.548158   70604 out.go:177] * Restarting existing kvm2 VM for "embed-certs-743937" ...
	I0311 21:34:03.549803   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Start
	I0311 21:34:03.549966   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring networks are active...
	I0311 21:34:03.550712   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring network default is active
	I0311 21:34:03.551124   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring network mk-embed-certs-743937 is active
	I0311 21:34:03.551528   70604 main.go:141] libmachine: (embed-certs-743937) Getting domain xml...
	I0311 21:34:03.552226   70604 main.go:141] libmachine: (embed-certs-743937) Creating domain...
	I0311 21:34:02.213709   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.214152   70458 main.go:141] libmachine: (no-preload-324578) Found IP for machine: 192.168.39.36
	I0311 21:34:02.214181   70458 main.go:141] libmachine: (no-preload-324578) Reserving static IP address...
	I0311 21:34:02.214196   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has current primary IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.214631   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "no-preload-324578", mac: "52:54:00:00:fc:98", ip: "192.168.39.36"} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.214655   70458 main.go:141] libmachine: (no-preload-324578) DBG | skip adding static IP to network mk-no-preload-324578 - found existing host DHCP lease matching {name: "no-preload-324578", mac: "52:54:00:00:fc:98", ip: "192.168.39.36"}
	I0311 21:34:02.214666   70458 main.go:141] libmachine: (no-preload-324578) Reserved static IP address: 192.168.39.36
	I0311 21:34:02.214680   70458 main.go:141] libmachine: (no-preload-324578) Waiting for SSH to be available...
	I0311 21:34:02.214704   70458 main.go:141] libmachine: (no-preload-324578) DBG | Getting to WaitForSSH function...
	I0311 21:34:02.216798   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.217068   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.217111   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.217285   70458 main.go:141] libmachine: (no-preload-324578) DBG | Using SSH client type: external
	I0311 21:34:02.217316   70458 main.go:141] libmachine: (no-preload-324578) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa (-rw-------)
	I0311 21:34:02.217356   70458 main.go:141] libmachine: (no-preload-324578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:02.217374   70458 main.go:141] libmachine: (no-preload-324578) DBG | About to run SSH command:
	I0311 21:34:02.217389   70458 main.go:141] libmachine: (no-preload-324578) DBG | exit 0
	I0311 21:34:02.340837   70458 main.go:141] libmachine: (no-preload-324578) DBG | SSH cmd err, output: <nil>: 
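
The WaitForSSH step above uses the "external" SSH client: it shells out to /usr/bin/ssh with the options logged a few lines earlier and runs "exit 0" until the command succeeds. A simplified re-creation of that single invocation (argument list copied from the log, error handling reduced to printing the result; not minikube's own code path):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Non-interactive options as printed in the log; the key path and address
		// come from this test run and would differ on another machine.
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa",
			"-p", "22",
			"docker@192.168.39.36",
			"exit 0",
		}
		out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
		fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
	}
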
	I0311 21:34:02.341154   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetConfigRaw
	I0311 21:34:02.341752   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:02.344368   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.344756   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.344791   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.344942   70458 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/config.json ...
	I0311 21:34:02.345142   70458 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:02.345159   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:02.345353   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.347647   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.348001   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.348029   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.348118   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.348284   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.348432   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.348548   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.348704   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.348913   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.348925   70458 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:02.457273   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:02.457298   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.457523   70458 buildroot.go:166] provisioning hostname "no-preload-324578"
	I0311 21:34:02.457554   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.457757   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.460347   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.460658   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.460688   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.460913   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.461126   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.461286   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.461415   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.461574   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.461758   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.461775   70458 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-324578 && echo "no-preload-324578" | sudo tee /etc/hostname
	I0311 21:34:02.583388   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-324578
	
	I0311 21:34:02.583414   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.586043   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.586399   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.586431   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.586592   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.586799   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.586957   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.587084   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.587271   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.587433   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.587449   70458 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-324578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-324578/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-324578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:02.702365   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:02.702399   70458 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:02.702420   70458 buildroot.go:174] setting up certificates
	I0311 21:34:02.702431   70458 provision.go:84] configureAuth start
	I0311 21:34:02.702439   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.702725   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:02.705459   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.705882   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.705902   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.706048   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.708166   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.708476   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.708502   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.708618   70458 provision.go:143] copyHostCerts
	I0311 21:34:02.708675   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:02.708684   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:02.708764   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:02.708875   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:02.708885   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:02.708911   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:02.708977   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:02.708984   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:02.709005   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:02.709063   70458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.no-preload-324578 san=[127.0.0.1 192.168.39.36 localhost minikube no-preload-324578]
	I0311 21:34:02.823423   70458 provision.go:177] copyRemoteCerts
	I0311 21:34:02.823484   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:02.823508   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.826221   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.826538   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.826584   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.826743   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.826974   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.827158   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.827344   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:02.912138   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:02.938138   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0311 21:34:02.967391   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:34:02.992208   70458 provision.go:87] duration metric: took 289.765831ms to configureAuth
	I0311 21:34:02.992232   70458 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:02.992376   70458 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:34:02.992440   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.994808   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.995124   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.995154   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.995315   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.995490   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.995640   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.995818   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.995997   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.996187   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.996202   70458 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:03.283611   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:03.283643   70458 machine.go:97] duration metric: took 938.487892ms to provisionDockerMachine
	I0311 21:34:03.283655   70458 start.go:293] postStartSetup for "no-preload-324578" (driver="kvm2")
	I0311 21:34:03.283667   70458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:03.283681   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.284008   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:03.284043   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.286802   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.287220   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.287262   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.287379   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.287546   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.287731   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.287930   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.372563   70458 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:03.377151   70458 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:03.377171   70458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:03.377225   70458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:03.377291   70458 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:03.377377   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:03.387792   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:03.412721   70458 start.go:296] duration metric: took 129.055927ms for postStartSetup
	I0311 21:34:03.412770   70458 fix.go:56] duration metric: took 21.487401487s for fixHost
	I0311 21:34:03.412790   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.415209   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.415507   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.415533   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.415668   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.415866   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.416035   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.416179   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.416353   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:03.416502   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:03.416513   70458 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:03.525759   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192843.475283818
	
	I0311 21:34:03.525781   70458 fix.go:216] guest clock: 1710192843.475283818
	I0311 21:34:03.525790   70458 fix.go:229] Guest: 2024-03-11 21:34:03.475283818 +0000 UTC Remote: 2024-03-11 21:34:03.412775346 +0000 UTC m=+298.052241307 (delta=62.508472ms)
	I0311 21:34:03.525815   70458 fix.go:200] guest clock delta is within tolerance: 62.508472ms
	I0311 21:34:03.525833   70458 start.go:83] releasing machines lock for "no-preload-324578", held for 21.600490138s
	I0311 21:34:03.525866   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.526157   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:03.528771   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.529117   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.529143   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.529272   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529721   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529897   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529978   70458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:03.530022   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.530124   70458 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:03.530151   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.532450   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.532624   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.532813   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.532843   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.533001   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.533010   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.533034   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.533171   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.533197   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.533350   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.533353   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.533504   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.533506   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.533639   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.614855   70458 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:03.638835   70458 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:03.787832   70458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:03.794627   70458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:03.794677   70458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:03.811771   70458 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:03.811790   70458 start.go:494] detecting cgroup driver to use...
	I0311 21:34:03.811845   70458 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:03.829561   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:03.844536   70458 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:03.844582   70458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:03.859811   70458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:03.875041   70458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:03.991456   70458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:04.174783   70458 docker.go:233] disabling docker service ...
	I0311 21:34:04.174848   70458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:04.192524   70458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:04.206906   70458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:04.340047   70458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:04.455686   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:04.472512   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:04.495487   70458 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:34:04.495550   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.506921   70458 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:04.506997   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.519408   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.531418   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.543684   70458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:04.555846   70458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:04.567610   70458 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:04.567658   70458 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:04.583015   70458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:04.594515   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:04.715185   70458 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:04.872750   70458 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:04.872848   70458 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:04.878207   70458 start.go:562] Will wait 60s for crictl version
	I0311 21:34:04.878250   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:04.882436   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:04.921007   70458 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:04.921079   70458 ssh_runner.go:195] Run: crio --version
	I0311 21:34:04.959326   70458 ssh_runner.go:195] Run: crio --version
	I0311 21:34:04.997595   70458 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0311 21:34:04.999092   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:05.002092   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:05.002526   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:05.002566   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:05.002790   70458 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:05.007758   70458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:05.023330   70458 kubeadm.go:877] updating cluster {Name:no-preload-324578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:05.023430   70458 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 21:34:05.023461   70458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:05.063043   70458 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0311 21:34:05.063071   70458 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:34:05.063161   70458 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.063170   70458 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.063183   70458 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.063190   70458 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.063233   70458 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0311 21:34:05.063171   70458 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.063272   70458 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.063307   70458 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.065013   70458 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.065019   70458 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.065020   70458 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.065045   70458 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0311 21:34:05.065017   70458 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.065018   70458 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.065064   70458 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.065365   70458 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.209182   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.211431   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.220663   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.230965   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.237859   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0311 21:34:05.260820   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.288596   70458 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0311 21:34:05.288651   70458 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.288697   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.324896   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.342987   70458 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0311 21:34:05.343030   70458 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.343080   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.371663   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.377262   70458 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0311 21:34:05.377306   70458 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.377349   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:04.792889   70604 main.go:141] libmachine: (embed-certs-743937) Waiting to get IP...
	I0311 21:34:04.793678   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:04.794097   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:04.794152   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:04.794064   71579 retry.go:31] will retry after 281.522937ms: waiting for machine to come up
	I0311 21:34:05.077518   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.077856   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.077889   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.077814   71579 retry.go:31] will retry after 303.836522ms: waiting for machine to come up
	I0311 21:34:05.383244   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.383796   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.383839   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.383758   71579 retry.go:31] will retry after 333.172379ms: waiting for machine to come up
	I0311 21:34:05.718117   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.718603   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.718630   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.718562   71579 retry.go:31] will retry after 469.046827ms: waiting for machine to come up
	I0311 21:34:06.189304   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:06.189748   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:06.189777   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:06.189705   71579 retry.go:31] will retry after 636.781259ms: waiting for machine to come up
	I0311 21:34:06.828672   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:06.829136   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:06.829174   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:06.829078   71579 retry.go:31] will retry after 758.609427ms: waiting for machine to come up
	I0311 21:34:07.589134   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:07.589490   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:07.589513   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:07.589466   71579 retry.go:31] will retry after 990.575872ms: waiting for machine to come up
	I0311 21:34:08.581971   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:08.582312   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:08.582344   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:08.582290   71579 retry.go:31] will retry after 1.142377902s: waiting for machine to come up
	I0311 21:34:05.421288   70458 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0311 21:34:05.421340   70458 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.421390   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473450   70458 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0311 21:34:05.473497   70458 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.473527   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.473545   70458 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0311 21:34:05.473584   70458 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.473603   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.473639   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473663   70458 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0311 21:34:05.473701   70458 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.473707   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.473730   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473548   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473766   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.569510   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.569615   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.578915   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0311 21:34:05.578979   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:05.579007   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:05.579029   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.579077   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:05.579117   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.579158   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.579209   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0311 21:34:05.579272   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:05.584413   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0311 21:34:05.584425   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.584458   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.679191   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0311 21:34:05.679259   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0311 21:34:05.679288   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:05.679337   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:05.679368   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0311 21:34:05.679369   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0311 21:34:05.679414   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:05.679428   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0311 21:34:05.679485   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:07.621341   70458 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.942028932s)
	I0311 21:34:07.621382   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0311 21:34:07.621385   70458 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.941873405s)
	I0311 21:34:07.621413   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0311 21:34:07.621424   70458 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (1.941989707s)
	I0311 21:34:07.621452   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0311 21:34:07.621544   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.037072472s)
	I0311 21:34:07.621558   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0311 21:34:07.621580   70458 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:07.621627   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:09.726761   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:09.727207   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:09.727241   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:09.727153   71579 retry.go:31] will retry after 1.17092616s: waiting for machine to come up
	I0311 21:34:10.899311   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:10.899656   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:10.899675   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:10.899631   71579 retry.go:31] will retry after 1.870900402s: waiting for machine to come up
	I0311 21:34:12.771931   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:12.772421   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:12.772457   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:12.772375   71579 retry.go:31] will retry after 2.721804623s: waiting for machine to come up
	I0311 21:34:11.524646   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.902991705s)
	I0311 21:34:11.524683   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0311 21:34:11.524711   70458 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:11.524787   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:13.704750   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.179921724s)
	I0311 21:34:13.704786   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0311 21:34:13.704817   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:13.704868   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:15.496186   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:15.496686   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:15.496722   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:15.496627   71579 retry.go:31] will retry after 2.568850361s: waiting for machine to come up
	I0311 21:34:18.068470   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:18.068926   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:18.068959   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:18.068872   71579 retry.go:31] will retry after 4.111366971s: waiting for machine to come up
	I0311 21:34:16.267427   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.562528088s)
	I0311 21:34:16.267458   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0311 21:34:16.267486   70458 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:16.267535   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:17.218029   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0311 21:34:17.218065   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:17.218104   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:18.987120   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.768996335s)
	I0311 21:34:18.987149   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0311 21:34:18.987167   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:18.987219   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:23.543571   70908 start.go:364] duration metric: took 4m22.394278247s to acquireMachinesLock for "old-k8s-version-239315"
	I0311 21:34:23.543649   70908 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:23.543661   70908 fix.go:54] fixHost starting: 
	I0311 21:34:23.544084   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:23.544139   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:23.561669   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0311 21:34:23.562158   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:23.562618   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:34:23.562645   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:23.562949   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:23.563114   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:23.563306   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetState
	I0311 21:34:23.565152   70908 fix.go:112] recreateIfNeeded on old-k8s-version-239315: state=Stopped err=<nil>
	I0311 21:34:23.565178   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	W0311 21:34:23.565351   70908 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:23.567943   70908 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239315" ...
	I0311 21:34:22.182707   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.183200   70604 main.go:141] libmachine: (embed-certs-743937) Found IP for machine: 192.168.50.114
	I0311 21:34:22.183228   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has current primary IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.183238   70604 main.go:141] libmachine: (embed-certs-743937) Reserving static IP address...
	I0311 21:34:22.183694   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "embed-certs-743937", mac: "52:54:00:84:b4:7a", ip: "192.168.50.114"} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.183716   70604 main.go:141] libmachine: (embed-certs-743937) DBG | skip adding static IP to network mk-embed-certs-743937 - found existing host DHCP lease matching {name: "embed-certs-743937", mac: "52:54:00:84:b4:7a", ip: "192.168.50.114"}
	I0311 21:34:22.183728   70604 main.go:141] libmachine: (embed-certs-743937) Reserved static IP address: 192.168.50.114
	I0311 21:34:22.183746   70604 main.go:141] libmachine: (embed-certs-743937) Waiting for SSH to be available...
	I0311 21:34:22.183760   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Getting to WaitForSSH function...
	I0311 21:34:22.185820   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.186157   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.186193   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.186285   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Using SSH client type: external
	I0311 21:34:22.186317   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa (-rw-------)
	I0311 21:34:22.186349   70604 main.go:141] libmachine: (embed-certs-743937) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:22.186368   70604 main.go:141] libmachine: (embed-certs-743937) DBG | About to run SSH command:
	I0311 21:34:22.186384   70604 main.go:141] libmachine: (embed-certs-743937) DBG | exit 0
	I0311 21:34:22.313253   70604 main.go:141] libmachine: (embed-certs-743937) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:22.313570   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetConfigRaw
	I0311 21:34:22.314271   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:22.317040   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.317404   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.317509   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.317641   70604 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/config.json ...
	I0311 21:34:22.317814   70604 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:22.317830   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:22.318049   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.320550   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.320833   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.320859   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.320992   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.321223   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.321405   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.321547   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.321708   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.321930   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.321944   70604 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:22.430028   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:22.430055   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.430345   70604 buildroot.go:166] provisioning hostname "embed-certs-743937"
	I0311 21:34:22.430374   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.430568   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.433555   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.433884   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.433907   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.434102   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.434311   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.434474   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.434611   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.434762   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.434936   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.434954   70604 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-743937 && echo "embed-certs-743937" | sudo tee /etc/hostname
	I0311 21:34:22.564819   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-743937
	
	I0311 21:34:22.564848   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.567667   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.568075   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.568122   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.568325   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.568519   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.568719   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.568913   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.569094   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.569335   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.569361   70604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-743937' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-743937/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-743937' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:22.684397   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:22.684425   70604 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:22.684473   70604 buildroot.go:174] setting up certificates
	I0311 21:34:22.684490   70604 provision.go:84] configureAuth start
	I0311 21:34:22.684507   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.684840   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:22.687805   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.688156   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.688178   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.688401   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.690975   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.691302   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.691321   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.691469   70604 provision.go:143] copyHostCerts
	I0311 21:34:22.691528   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:22.691540   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:22.691598   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:22.691690   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:22.691706   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:22.691729   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:22.691834   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:22.691850   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:22.691878   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:22.691946   70604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.embed-certs-743937 san=[127.0.0.1 192.168.50.114 embed-certs-743937 localhost minikube]
	I0311 21:34:22.838395   70604 provision.go:177] copyRemoteCerts
	I0311 21:34:22.838452   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:22.838478   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.840975   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.841308   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.841342   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.841487   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.841684   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.841834   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.841968   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:22.924202   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:22.956079   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0311 21:34:22.982352   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 21:34:23.008286   70604 provision.go:87] duration metric: took 323.780619ms to configureAuth
	I0311 21:34:23.008311   70604 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:23.008481   70604 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:34:23.008553   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.011128   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.011439   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.011461   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.011632   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.011780   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.011919   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.012094   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.012278   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:23.012436   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:23.012452   70604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:23.288122   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:23.288146   70604 machine.go:97] duration metric: took 970.321311ms to provisionDockerMachine
	I0311 21:34:23.288157   70604 start.go:293] postStartSetup for "embed-certs-743937" (driver="kvm2")
	I0311 21:34:23.288167   70604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:23.288180   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.288496   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:23.288532   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.291434   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.291823   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.291856   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.292079   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.292297   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.292468   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.292629   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.376367   70604 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:23.381629   70604 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:23.381660   70604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:23.381754   70604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:23.381855   70604 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:23.381967   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:23.392280   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:23.423241   70604 start.go:296] duration metric: took 135.071082ms for postStartSetup
	I0311 21:34:23.423283   70604 fix.go:56] duration metric: took 19.897275281s for fixHost
	I0311 21:34:23.423310   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.426264   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.426623   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.426652   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.426862   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.427052   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.427256   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.427419   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.427575   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:23.427809   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:23.427822   70604 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:23.543425   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192863.499269756
	
	I0311 21:34:23.543447   70604 fix.go:216] guest clock: 1710192863.499269756
	I0311 21:34:23.543454   70604 fix.go:229] Guest: 2024-03-11 21:34:23.499269756 +0000 UTC Remote: 2024-03-11 21:34:23.423289031 +0000 UTC m=+304.494814333 (delta=75.980725ms)
	I0311 21:34:23.543472   70604 fix.go:200] guest clock delta is within tolerance: 75.980725ms
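Editor's note: the fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it to the host clock, and accept the machine when the drift is within tolerance. Below is a minimal Go sketch of that comparison; it is not minikube's implementation, and the 1-second tolerance and function names are assumptions for illustration only.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Sample value taken from the log line above.
	guest, err := parseGuestClock("1710192863.499269756")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	// Assumed tolerance for illustration; the real threshold may differ.
	const tolerance = time.Second
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}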
	I0311 21:34:23.543478   70604 start.go:83] releasing machines lock for "embed-certs-743937", held for 20.0175167s
	I0311 21:34:23.543504   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.543746   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:23.546763   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.547188   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.547223   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.547396   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.547882   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.548077   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.548163   70604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:23.548226   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.548282   70604 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:23.548309   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.551186   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551485   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551609   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.551642   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551795   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.551979   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.552001   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.552035   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.552146   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.552211   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.552277   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.552368   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.552501   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.552666   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.660064   70604 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:23.668731   70604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:23.831784   70604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:23.840331   70604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:23.840396   70604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:23.864730   70604 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:23.864766   70604 start.go:494] detecting cgroup driver to use...
	I0311 21:34:23.864831   70604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:23.886072   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:23.901660   70604 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:23.901727   70604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:23.917374   70604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:23.932525   70604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:24.066368   70604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:24.222425   70604 docker.go:233] disabling docker service ...
	I0311 21:34:24.222487   70604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:24.240937   70604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:24.257050   70604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:24.395003   70604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:24.550709   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:24.572524   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:24.599710   70604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:34:24.599776   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.612426   70604 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:24.612514   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.626989   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.639576   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.653711   70604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:24.673581   70604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:24.684772   70604 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:24.684841   70604 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:24.707855   70604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:24.719801   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:24.904788   70604 ssh_runner.go:195] Run: sudo systemctl restart crio
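Editor's note: the two `sed` invocations above rewrite `pause_image` and `cgroup_manager` in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A rough Go equivalent of those two edits, operating on the config text in memory, is sketched below; file handling is omitted and the names are illustrative, not minikube's code.

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits above: point pause_image at the
// requested image and force cgroup_manager to the requested driver.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}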
	I0311 21:34:25.063437   70604 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:25.063511   70604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:25.070294   70604 start.go:562] Will wait 60s for crictl version
	I0311 21:34:25.070352   70604 ssh_runner.go:195] Run: which crictl
	I0311 21:34:25.074945   70604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:25.121979   70604 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:25.122070   70604 ssh_runner.go:195] Run: crio --version
	I0311 21:34:25.159092   70604 ssh_runner.go:195] Run: crio --version
	I0311 21:34:25.207391   70604 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 21:34:21.469205   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.481954559s)
	I0311 21:34:21.469242   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0311 21:34:21.469285   70458 cache_images.go:123] Successfully loaded all cached images
	I0311 21:34:21.469295   70458 cache_images.go:92] duration metric: took 16.40620232s to LoadCachedImages
	I0311 21:34:21.469306   70458 kubeadm.go:928] updating node { 192.168.39.36 8443 v1.29.0-rc.2 crio true true} ...
	I0311 21:34:21.469436   70458 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-324578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:21.469513   70458 ssh_runner.go:195] Run: crio config
	I0311 21:34:21.531635   70458 cni.go:84] Creating CNI manager for ""
	I0311 21:34:21.531659   70458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:21.531671   70458 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:21.531690   70458 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-324578 NodeName:no-preload-324578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:34:21.531820   70458 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-324578"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
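Editor's note: the kubeadm config printed above is assembled from the options struct logged at kubeadm.go:181 and written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. The sketch that follows renders just the InitConfiguration section with Go's text/template; the field names and template text are assumptions for illustration, not the actual minikube template.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down render of the InitConfiguration section shown above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values taken from the no-preload-324578 run logged above.
	_ = t.Execute(os.Stdout, struct {
		NodeName      string
		NodeIP        string
		APIServerPort int
	}{"no-preload-324578", "192.168.39.36", 8443})
}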
	I0311 21:34:21.531876   70458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0311 21:34:21.546000   70458 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:21.546060   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:21.558818   70458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0311 21:34:21.577685   70458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0311 21:34:21.595960   70458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0311 21:34:21.615003   70458 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:21.619290   70458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:21.633307   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:21.751586   70458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:21.771672   70458 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578 for IP: 192.168.39.36
	I0311 21:34:21.771698   70458 certs.go:194] generating shared ca certs ...
	I0311 21:34:21.771717   70458 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:21.771907   70458 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:21.771975   70458 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:21.771987   70458 certs.go:256] generating profile certs ...
	I0311 21:34:21.772093   70458 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/client.key
	I0311 21:34:21.772190   70458 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.key.681a9200
	I0311 21:34:21.772244   70458 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.key
	I0311 21:34:21.772371   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:21.772421   70458 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:21.772435   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:21.772475   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:21.772509   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:21.772542   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:21.772606   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:21.773241   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:21.833566   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:21.868156   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:21.910118   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:21.952222   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0311 21:34:21.988148   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 21:34:22.018493   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:22.045225   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:22.071481   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:22.097525   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:22.123425   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:22.156613   70458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:22.174679   70458 ssh_runner.go:195] Run: openssl version
	I0311 21:34:22.181137   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:22.197490   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.203508   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.203556   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.210822   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:22.224269   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:22.237282   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.242762   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.242816   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.249334   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:22.261866   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:22.273674   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.279004   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.279055   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.285394   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:22.299493   70458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:22.304827   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:22.311349   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:22.318377   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:22.325621   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:22.332316   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:22.338893   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
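Editor's note: the `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours. The same question can be answered in Go with crypto/x509; a small self-contained sketch follows (function name and error handling are illustrative, not minikube's code).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, i.e. the condition `openssl x509 -checkend <seconds>` tests for.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// 86400s = 24h, matching the -checkend argument in the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}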
	I0311 21:34:22.345167   70458 kubeadm.go:391] StartCluster: {Name:no-preload-324578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:22.345246   70458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:22.345286   70458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:22.386703   70458 cri.go:89] found id: ""
	I0311 21:34:22.386785   70458 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:22.398475   70458 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:22.398494   70458 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:22.398500   70458 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:22.398558   70458 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:22.409434   70458 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:22.410675   70458 kubeconfig.go:125] found "no-preload-324578" server: "https://192.168.39.36:8443"
	I0311 21:34:22.412906   70458 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:22.423677   70458 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.36
	I0311 21:34:22.423708   70458 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:22.423719   70458 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:22.423762   70458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:22.472548   70458 cri.go:89] found id: ""
	I0311 21:34:22.472615   70458 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:22.494701   70458 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:22.506944   70458 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:22.506964   70458 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:22.507015   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:22.517468   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:22.517521   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:22.528281   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:22.538496   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:22.538533   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:22.553009   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:22.566120   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:22.566189   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:22.579239   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:22.590180   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:22.590227   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:22.602988   70458 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:22.615631   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:22.730568   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.355205   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.588923   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.694870   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.796820   70458 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:23.796918   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.297341   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.797197   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.840030   70458 api_server.go:72] duration metric: took 1.043209284s to wait for apiserver process to appear ...
	I0311 21:34:24.840062   70458 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:34:24.840101   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:24.840560   70458 api_server.go:269] stopped: https://192.168.39.36:8443/healthz: Get "https://192.168.39.36:8443/healthz": dial tcp 192.168.39.36:8443: connect: connection refused
	I0311 21:34:25.341161   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
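Editor's note: the api_server.go lines above poll https://192.168.39.36:8443/healthz roughly every 500ms, first seeing connection-refused, then 403 and 500 responses, until the apiserver reports healthy. Below is a self-contained sketch of such a probe, assuming a 500ms interval and skipping TLS verification as a bootstrap-time check would; it is not minikube's implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. TLS verification is skipped here only because this
// is a bootstrap-time probe of a cluster whose CA may not be trusted yet.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.36:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}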
	I0311 21:34:23.569356   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .Start
	I0311 21:34:23.569527   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring networks are active...
	I0311 21:34:23.570188   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network default is active
	I0311 21:34:23.570613   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network mk-old-k8s-version-239315 is active
	I0311 21:34:23.571070   70908 main.go:141] libmachine: (old-k8s-version-239315) Getting domain xml...
	I0311 21:34:23.571836   70908 main.go:141] libmachine: (old-k8s-version-239315) Creating domain...
	I0311 21:34:24.895619   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting to get IP...
	I0311 21:34:24.896680   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:24.897160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:24.897218   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:24.897131   71714 retry.go:31] will retry after 268.563191ms: waiting for machine to come up
	I0311 21:34:25.167783   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.168312   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.168343   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.168268   71714 retry.go:31] will retry after 245.059124ms: waiting for machine to come up
	I0311 21:34:25.414644   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.415139   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.415168   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.415100   71714 retry.go:31] will retry after 407.807793ms: waiting for machine to come up
	I0311 21:34:25.824887   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.825351   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.825379   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.825274   71714 retry.go:31] will retry after 503.187834ms: waiting for machine to come up
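Editor's note: the retry.go lines above wait for the old-k8s-version libvirt domain to obtain an IP, retrying after short randomized delays. A generic sketch of that retry-until-deadline pattern is shown below; the delay bounds and names are illustrative, not taken from minikube.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a short randomized delay between attempts
// until fn succeeds or the deadline passes.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		delay := 200*time.Millisecond + time.Duration(rand.Intn(400))*time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
}

func main() {
	attempts := 0
	_ = retryUntil(10*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("machine is up after", attempts, "attempts")
}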
	I0311 21:34:25.208819   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:25.211726   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:25.212203   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:25.212244   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:25.212486   70604 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:25.217365   70604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:25.233670   70604 kubeadm.go:877] updating cluster {Name:embed-certs-743937 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:25.233825   70604 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:34:25.233886   70604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:25.282028   70604 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 21:34:25.282108   70604 ssh_runner.go:195] Run: which lz4
	I0311 21:34:25.287047   70604 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0311 21:34:25.291721   70604 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:34:25.291751   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 21:34:27.414481   70604 crio.go:444] duration metric: took 2.127464595s to copy over tarball
	I0311 21:34:27.414554   70604 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:34:28.225996   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:28.226031   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:28.226048   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.285274   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:28.285307   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:28.340493   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.512353   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:28.512409   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:28.840800   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.852523   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:28.852560   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:29.341135   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:29.354997   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:29.355028   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:29.840769   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:29.848023   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0311 21:34:29.856262   70458 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:34:29.856290   70458 api_server.go:131] duration metric: took 5.016219789s to wait for apiserver health ...
	I0311 21:34:29.856300   70458 cni.go:84] Creating CNI manager for ""
	I0311 21:34:29.856308   70458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:29.858297   70458 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:34:29.859734   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:34:29.891375   70458 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:34:29.932393   70458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:34:29.959208   70458 system_pods.go:59] 8 kube-system pods found
	I0311 21:34:29.959257   70458 system_pods.go:61] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:34:29.959268   70458 system_pods.go:61] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:34:29.959278   70458 system_pods.go:61] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:34:29.959290   70458 system_pods.go:61] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:34:29.959296   70458 system_pods.go:61] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:34:29.959303   70458 system_pods.go:61] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:34:29.959319   70458 system_pods.go:61] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:34:29.959335   70458 system_pods.go:61] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:34:29.959343   70458 system_pods.go:74] duration metric: took 26.926978ms to wait for pod list to return data ...
	I0311 21:34:29.959355   70458 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:34:29.963151   70458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:34:29.963179   70458 node_conditions.go:123] node cpu capacity is 2
	I0311 21:34:29.963193   70458 node_conditions.go:105] duration metric: took 3.825246ms to run NodePressure ...
	I0311 21:34:29.963209   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:26.330005   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:26.330547   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:26.330569   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:26.330464   71714 retry.go:31] will retry after 723.914956ms: waiting for machine to come up
	I0311 21:34:27.056271   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.056879   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.056901   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.056834   71714 retry.go:31] will retry after 693.583075ms: waiting for machine to come up
	I0311 21:34:27.752514   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.752958   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.752980   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.752916   71714 retry.go:31] will retry after 902.247864ms: waiting for machine to come up
	I0311 21:34:28.657551   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:28.658023   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:28.658079   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:28.658008   71714 retry.go:31] will retry after 1.140425887s: waiting for machine to come up
	I0311 21:34:29.800305   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:29.800824   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:29.800852   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:29.800774   71714 retry.go:31] will retry after 1.68593342s: waiting for machine to come up
	I0311 21:34:32.367999   70458 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.404768175s)
	I0311 21:34:32.368034   70458 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:34:32.375444   70458 kubeadm.go:733] kubelet initialised
	I0311 21:34:32.375468   70458 kubeadm.go:734] duration metric: took 7.423643ms waiting for restarted kubelet to initialise ...
	I0311 21:34:32.375477   70458 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:32.383579   70458 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.389728   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "coredns-76f75df574-s6lsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.389755   70458 pod_ready.go:81] duration metric: took 6.144226ms for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.389766   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "coredns-76f75df574-s6lsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.389775   70458 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.398797   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "etcd-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.398822   70458 pod_ready.go:81] duration metric: took 9.033188ms for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.398833   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "etcd-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.398841   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.407870   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-apiserver-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.407905   70458 pod_ready.go:81] duration metric: took 9.056349ms for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.407915   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-apiserver-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.407928   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.414434   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.414455   70458 pod_ready.go:81] duration metric: took 6.519611ms for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.414463   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.414468   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.771994   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-proxy-rmz4b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.772025   70458 pod_ready.go:81] duration metric: took 357.549783ms for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.772034   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-proxy-rmz4b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.772041   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:33.175562   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-scheduler-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.175595   70458 pod_ready.go:81] duration metric: took 403.546508ms for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:33.175608   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-scheduler-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.175617   70458 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:33.573749   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.573777   70458 pod_ready.go:81] duration metric: took 398.141162ms for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:33.573789   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.573799   70458 pod_ready.go:38] duration metric: took 1.198311127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:33.573862   70458 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:34:33.592112   70458 ops.go:34] apiserver oom_adj: -16
	I0311 21:34:33.592148   70458 kubeadm.go:591] duration metric: took 11.193640837s to restartPrimaryControlPlane
	I0311 21:34:33.592161   70458 kubeadm.go:393] duration metric: took 11.247001751s to StartCluster
	I0311 21:34:33.592181   70458 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:33.592269   70458 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:34:33.594144   70458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:33.594461   70458 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:34:33.596303   70458 out.go:177] * Verifying Kubernetes components...
	I0311 21:34:33.594553   70458 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:34:33.594702   70458 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:34:33.597724   70458 addons.go:69] Setting default-storageclass=true in profile "no-preload-324578"
	I0311 21:34:33.597727   70458 addons.go:69] Setting storage-provisioner=true in profile "no-preload-324578"
	I0311 21:34:33.597739   70458 addons.go:69] Setting metrics-server=true in profile "no-preload-324578"
	I0311 21:34:33.597759   70458 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-324578"
	I0311 21:34:33.597771   70458 addons.go:234] Setting addon storage-provisioner=true in "no-preload-324578"
	I0311 21:34:33.597772   70458 addons.go:234] Setting addon metrics-server=true in "no-preload-324578"
	W0311 21:34:33.597780   70458 addons.go:243] addon storage-provisioner should already be in state true
	W0311 21:34:33.597795   70458 addons.go:243] addon metrics-server should already be in state true
	I0311 21:34:33.597828   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.597838   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.597733   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:33.598079   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598110   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.598224   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598260   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598305   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.598269   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.613473   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0311 21:34:33.613994   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.614558   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.614576   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.614946   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.615385   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.615415   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.618026   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0311 21:34:33.618201   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0311 21:34:33.618370   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.618497   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.618818   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.618833   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.618978   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.618989   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.619157   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.619343   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.619389   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.619926   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.619956   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.623211   70458 addons.go:234] Setting addon default-storageclass=true in "no-preload-324578"
	W0311 21:34:33.623232   70458 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:34:33.623260   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.623634   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.623660   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.635263   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35961
	I0311 21:34:33.635575   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.636071   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.636080   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.636462   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.636606   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.638520   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.640583   70458 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:34:33.642029   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:34:33.642045   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:34:33.642058   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.640562   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0311 21:34:33.641020   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0311 21:34:33.642572   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.643082   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.643107   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.643432   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.644002   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.644030   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.644213   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.644711   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.644733   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.645120   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.645334   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.645406   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.645861   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.645888   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.646042   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.646332   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.646548   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.646719   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.646986   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.648681   70458 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:30.659466   70604 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.244884989s)
	I0311 21:34:30.659492   70604 crio.go:451] duration metric: took 3.244983149s to extract the tarball
	I0311 21:34:30.659500   70604 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:34:30.708661   70604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:30.769502   70604 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:34:30.769530   70604 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:34:30.769540   70604 kubeadm.go:928] updating node { 192.168.50.114 8443 v1.28.4 crio true true} ...
	I0311 21:34:30.769675   70604 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-743937 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:30.769757   70604 ssh_runner.go:195] Run: crio config
	I0311 21:34:30.820223   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:34:30.820251   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:30.820267   70604 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:30.820296   70604 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-743937 NodeName:embed-certs-743937 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:34:30.820475   70604 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-743937"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:30.820563   70604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 21:34:30.833086   70604 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:30.833175   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:30.844335   70604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0311 21:34:30.863586   70604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:34:30.883598   70604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0311 21:34:30.904711   70604 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:30.909433   70604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:30.924054   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:31.064573   70604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:31.096931   70604 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937 for IP: 192.168.50.114
	I0311 21:34:31.096960   70604 certs.go:194] generating shared ca certs ...
	I0311 21:34:31.096980   70604 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:31.097157   70604 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:31.097220   70604 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:31.097236   70604 certs.go:256] generating profile certs ...
	I0311 21:34:31.097368   70604 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/client.key
	I0311 21:34:31.097453   70604 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.key.c230aed9
	I0311 21:34:31.097520   70604 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.key
	I0311 21:34:31.097660   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:31.097709   70604 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:31.097770   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:31.097826   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:31.097867   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:31.097899   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:31.097958   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:31.098771   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:31.135109   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:31.173483   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:31.215059   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:31.253244   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0311 21:34:31.305450   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 21:34:31.340238   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:31.366993   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:31.393936   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:31.420998   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:31.446500   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:31.474047   70604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:31.493935   70604 ssh_runner.go:195] Run: openssl version
	I0311 21:34:31.500607   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:31.513874   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.519255   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.519303   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.525967   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:31.538995   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:31.551625   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.557235   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.557292   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.563658   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:31.576689   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:31.589299   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.594405   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.594453   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.601041   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:31.619307   70604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:31.624565   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:31.632121   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:31.638843   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:31.646400   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:31.652701   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:31.659661   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 21:34:31.666390   70604 kubeadm.go:391] StartCluster: {Name:embed-certs-743937 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:31.666496   70604 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:31.666546   70604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:31.716714   70604 cri.go:89] found id: ""
	I0311 21:34:31.716796   70604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:31.733945   70604 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:31.733967   70604 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:31.733974   70604 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:31.734019   70604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:31.746543   70604 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:31.747720   70604 kubeconfig.go:125] found "embed-certs-743937" server: "https://192.168.50.114:8443"
	I0311 21:34:31.749670   70604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:31.762374   70604 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.114
	I0311 21:34:31.762401   70604 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:31.762410   70604 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:31.762462   70604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:31.811965   70604 cri.go:89] found id: ""
	I0311 21:34:31.812055   70604 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:31.836539   70604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:31.849272   70604 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:31.849295   70604 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:31.849348   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:31.861345   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:31.861423   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:31.875436   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:31.887183   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:31.887251   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:31.900032   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:31.911614   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:31.911690   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:31.924791   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:31.937131   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:31.937204   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:31.949123   70604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:31.960234   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:32.089622   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:32.806370   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.033263   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.135981   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.248827   70604 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:33.248917   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:33.749207   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:33.650190   70458 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:34:33.650207   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:34:33.650223   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.653451   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.653895   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.653920   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.654131   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.654302   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.654472   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.654631   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.689121   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0311 21:34:33.689487   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.693084   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.693105   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.693596   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.693796   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.696074   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.696629   70458 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:34:33.696644   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:34:33.696662   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.699920   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.700323   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.700342   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.700564   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.700756   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.700859   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.700932   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.896331   70458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:33.969322   70458 node_ready.go:35] waiting up to 6m0s for node "no-preload-324578" to be "Ready" ...
	I0311 21:34:34.037114   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:34:34.059051   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:34:34.059080   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:34:34.094822   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:34:34.142231   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:34:34.142259   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:34:34.218979   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:34:34.219002   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:34:34.260381   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:34:35.648210   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.61103949s)
	I0311 21:34:35.648241   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.553388189s)
	I0311 21:34:35.648344   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648381   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648367   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648409   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648658   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.648675   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.648685   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648694   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648754   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.648997   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.649019   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.650050   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.650068   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.650091   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.650101   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.650367   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.650384   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.658738   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.658764   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.658991   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.659007   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.687393   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.426969773s)
	I0311 21:34:35.687453   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.687467   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.687771   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.687810   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.687828   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.687848   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.687831   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.688142   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.688164   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.688178   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.688214   70458 addons.go:470] Verifying addon metrics-server=true in "no-preload-324578"
	I0311 21:34:35.690413   70458 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0311 21:34:31.488010   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:31.488449   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:31.488471   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:31.488421   71714 retry.go:31] will retry after 2.325869089s: waiting for machine to come up
	I0311 21:34:33.815568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:33.816215   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:33.816236   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:33.816176   71714 retry.go:31] will retry after 2.457084002s: waiting for machine to come up
	I0311 21:34:34.249462   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:34.749177   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:34.778830   70604 api_server.go:72] duration metric: took 1.530004395s to wait for apiserver process to appear ...
	I0311 21:34:34.778858   70604 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:34:34.778879   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:34.779469   70604 api_server.go:269] stopped: https://192.168.50.114:8443/healthz: Get "https://192.168.50.114:8443/healthz": dial tcp 192.168.50.114:8443: connect: connection refused
	I0311 21:34:35.279027   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.110193   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:38.110221   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:38.110234   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.159861   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:38.159909   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:38.279045   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.289460   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:38.289491   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:38.779423   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.785174   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:38.785206   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:39.278910   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:39.290017   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:39.290054   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:39.779616   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:39.786362   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0311 21:34:39.794557   70604 api_server.go:141] control plane version: v1.28.4
	I0311 21:34:39.794583   70604 api_server.go:131] duration metric: took 5.01571788s to wait for apiserver health ...
	I0311 21:34:39.794594   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:34:39.794601   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:39.796063   70604 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
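	The 403/500/200 sequence above is the apiserver /healthz probe converging as RBAC bootstrap and etcd settle. A minimal sketch of such a probe loop, assuming an anonymous client and the address shown in the log (illustrative only, not minikube's actual implementation):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Anonymous probe, so certificate verification is skipped here; that is
		// also why the early responses above are 403 "system:anonymous" errors.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.50.114:8443/healthz" // address taken from the log above
		for i := 0; i < 60; i++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("stopped:", err) // apiserver not listening yet
			} else {
				fmt.Println("healthz returned", resp.StatusCode)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}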
	I0311 21:34:35.691844   70458 addons.go:505] duration metric: took 2.097304232s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0311 21:34:35.974533   70458 node_ready.go:53] node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:37.983073   70458 node_ready.go:53] node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:38.977713   70458 node_ready.go:49] node "no-preload-324578" has status "Ready":"True"
	I0311 21:34:38.977738   70458 node_ready.go:38] duration metric: took 5.008382488s for node "no-preload-324578" to be "Ready" ...
	I0311 21:34:38.977749   70458 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:38.986414   70458 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:38.993430   70458 pod_ready.go:92] pod "coredns-76f75df574-s6lsb" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:38.993454   70458 pod_ready.go:81] duration metric: took 7.012539ms for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:38.993465   70458 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:36.274640   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:36.275119   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:36.275157   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:36.275064   71714 retry.go:31] will retry after 3.618026102s: waiting for machine to come up
	I0311 21:34:39.894877   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:39.895397   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:39.895447   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:39.895343   71714 retry.go:31] will retry after 3.826847061s: waiting for machine to come up
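	The retry.go messages above follow a simple wait-for-condition pattern: poll, then sleep a growing, jittered interval before the next attempt. A hypothetical sketch of that pattern (the bounds and the condition are assumptions, not minikube's code):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls cond until it returns nil, sleeping a growing, jittered
	// interval between attempts, similar to the "will retry after N" messages.
	func waitFor(cond func() error, maxAttempts int) error {
		for i := 0; i < maxAttempts; i++ {
			if err := cond(); err == nil {
				return nil
			}
			d := time.Duration(float64(time.Second) * (1 + rand.Float64()) * float64(i+1))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
			time.Sleep(d)
		}
		return errors.New("condition never became true")
	}

	func main() {
		attempts := 0
		_ = waitFor(func() error {
			attempts++
			if attempts < 3 {
				return errors.New("machine not up yet") // placeholder condition
			}
			return nil
		}, 10)
	}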
	I0311 21:34:39.797420   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:34:39.810877   70604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:34:39.836773   70604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:34:39.852496   70604 system_pods.go:59] 8 kube-system pods found
	I0311 21:34:39.852541   70604 system_pods.go:61] "coredns-5dd5756b68-czng9" [a57d0643-36c5-44e2-a113-de051d0e0408] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:34:39.852556   70604 system_pods.go:61] "etcd-embed-certs-743937" [9f0051e8-247f-4968-a834-c38c5f0c4407] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:34:39.852567   70604 system_pods.go:61] "kube-apiserver-embed-certs-743937" [4ac979a6-1906-4a58-9d41-9587d66d81ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:34:39.852578   70604 system_pods.go:61] "kube-controller-manager-embed-certs-743937" [263ba100-e911-4857-a973-c4dc9312a653] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:34:39.852591   70604 system_pods.go:61] "kube-proxy-n2qzt" [21f56cfb-a3f5-4c4b-993d-53b6d8f60ec2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:34:39.852600   70604 system_pods.go:61] "kube-scheduler-embed-certs-743937" [0121fa4d-91a8-432b-9f21-c6e8c0b33872] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:34:39.852606   70604 system_pods.go:61] "metrics-server-57f55c9bc5-7qw98" [3d3f2e87-2e36-4ca3-b31c-fc5f38251f03] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:34:39.852617   70604 system_pods.go:61] "storage-provisioner" [72fd13c7-1a79-4e8a-bdc2-f45117599d85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:34:39.852624   70604 system_pods.go:74] duration metric: took 15.823708ms to wait for pod list to return data ...
	I0311 21:34:39.852634   70604 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:34:39.856288   70604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:34:39.856309   70604 node_conditions.go:123] node cpu capacity is 2
	I0311 21:34:39.856317   70604 node_conditions.go:105] duration metric: took 3.676347ms to run NodePressure ...
	I0311 21:34:39.856331   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:40.103882   70604 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:34:40.108726   70604 kubeadm.go:733] kubelet initialised
	I0311 21:34:40.108758   70604 kubeadm.go:734] duration metric: took 4.847245ms waiting for restarted kubelet to initialise ...
	I0311 21:34:40.108768   70604 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:40.115566   70604 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-czng9" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:42.124435   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:45.026187   70417 start.go:364] duration metric: took 58.09976601s to acquireMachinesLock for "default-k8s-diff-port-766430"
	I0311 21:34:45.026231   70417 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:45.026242   70417 fix.go:54] fixHost starting: 
	I0311 21:34:45.026632   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:45.026661   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:45.046341   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0311 21:34:45.046779   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:45.047336   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:34:45.047375   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:45.047741   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:45.047920   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:34:45.048090   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:34:45.049581   70417 fix.go:112] recreateIfNeeded on default-k8s-diff-port-766430: state=Stopped err=<nil>
	I0311 21:34:45.049605   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	W0311 21:34:45.049759   70417 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:45.051505   70417 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-766430" ...
	I0311 21:34:41.001474   70458 pod_ready.go:102] pod "etcd-no-preload-324578" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:43.500991   70458 pod_ready.go:92] pod "etcd-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.501018   70458 pod_ready.go:81] duration metric: took 4.507545237s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.501030   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.506732   70458 pod_ready.go:92] pod "kube-apiserver-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.506753   70458 pod_ready.go:81] duration metric: took 5.714866ms for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.506764   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.511432   70458 pod_ready.go:92] pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.511456   70458 pod_ready.go:81] duration metric: took 4.684021ms for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.511469   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.516333   70458 pod_ready.go:92] pod "kube-proxy-rmz4b" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.516360   70458 pod_ready.go:81] duration metric: took 4.882955ms for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.516370   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.521501   70458 pod_ready.go:92] pod "kube-scheduler-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.521524   70458 pod_ready.go:81] duration metric: took 5.146945ms for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.521532   70458 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.723851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724335   70908 main.go:141] libmachine: (old-k8s-version-239315) Found IP for machine: 192.168.72.52
	I0311 21:34:43.724367   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserving static IP address...
	I0311 21:34:43.724382   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has current primary IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724722   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.724759   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserved static IP address: 192.168.72.52
	I0311 21:34:43.724774   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | skip adding static IP to network mk-old-k8s-version-239315 - found existing host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"}
	I0311 21:34:43.724797   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Getting to WaitForSSH function...
	I0311 21:34:43.724815   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting for SSH to be available...
	I0311 21:34:43.727015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727330   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.727354   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727541   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH client type: external
	I0311 21:34:43.727568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa (-rw-------)
	I0311 21:34:43.727624   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:43.727641   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | About to run SSH command:
	I0311 21:34:43.727651   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | exit 0
	I0311 21:34:43.848884   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:43.849287   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetConfigRaw
	I0311 21:34:43.850084   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:43.852942   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853529   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.853572   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853801   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:34:43.854001   70908 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:43.854024   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:43.854255   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.856623   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857153   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.857187   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857321   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.857516   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857702   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857897   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.858105   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.858332   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.858349   70908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:43.961617   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:43.961664   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.961921   70908 buildroot.go:166] provisioning hostname "old-k8s-version-239315"
	I0311 21:34:43.961945   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.962134   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.964672   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.964987   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.965015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.965122   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.965305   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965466   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965591   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.965801   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.966042   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.966055   70908 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239315 && echo "old-k8s-version-239315" | sudo tee /etc/hostname
	I0311 21:34:44.088097   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239315
	
	I0311 21:34:44.088126   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.090911   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091167   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.091205   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091347   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.091524   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091680   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091818   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.091984   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.092185   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.092205   70908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239315' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239315/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239315' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:44.207643   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:44.207674   70908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:44.207693   70908 buildroot.go:174] setting up certificates
	I0311 21:34:44.207701   70908 provision.go:84] configureAuth start
	I0311 21:34:44.207710   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:44.207975   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:44.211160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211556   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.211588   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211754   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.214211   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214553   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.214585   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214732   70908 provision.go:143] copyHostCerts
	I0311 21:34:44.214797   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:44.214813   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:44.214886   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:44.214991   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:44.215005   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:44.215035   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:44.215160   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:44.215171   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:44.215198   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:44.215267   70908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239315 san=[127.0.0.1 192.168.72.52 localhost minikube old-k8s-version-239315]
	I0311 21:34:44.305250   70908 provision.go:177] copyRemoteCerts
	I0311 21:34:44.305329   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:44.305367   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.308244   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308636   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.308673   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308874   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.309092   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.309290   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.309446   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.394958   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:44.423314   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0311 21:34:44.459338   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:34:44.491201   70908 provision.go:87] duration metric: took 283.487383ms to configureAuth
	I0311 21:34:44.491232   70908 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:44.491419   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:34:44.491484   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.494039   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494476   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.494509   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494638   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.494830   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.494998   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.495175   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.495366   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.495548   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.495570   70908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:44.787935   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:44.787961   70908 machine.go:97] duration metric: took 933.945971ms to provisionDockerMachine
	I0311 21:34:44.787971   70908 start.go:293] postStartSetup for "old-k8s-version-239315" (driver="kvm2")
	I0311 21:34:44.787983   70908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:44.788007   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:44.788327   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:44.788355   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.791133   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791460   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.791492   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791637   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.791858   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.792021   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.792165   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.877163   70908 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:44.882141   70908 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:44.882164   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:44.882241   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:44.882330   70908 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:44.882442   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:44.894699   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:44.919809   70908 start.go:296] duration metric: took 131.8264ms for postStartSetup
	I0311 21:34:44.919848   70908 fix.go:56] duration metric: took 21.376188092s for fixHost
	I0311 21:34:44.919867   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.922414   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922708   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.922738   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922876   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.923075   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923274   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923455   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.923618   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.923806   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.923831   70908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:45.026068   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192885.004450463
	
	I0311 21:34:45.026088   70908 fix.go:216] guest clock: 1710192885.004450463
	I0311 21:34:45.026096   70908 fix.go:229] Guest: 2024-03-11 21:34:45.004450463 +0000 UTC Remote: 2024-03-11 21:34:44.919851167 +0000 UTC m=+283.922086595 (delta=84.599296ms)
	I0311 21:34:45.026118   70908 fix.go:200] guest clock delta is within tolerance: 84.599296ms
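
The fix.go lines above read the guest clock over SSH with "date +%s.%N", compare it against the host clock captured just before the SSH round trip, and accept the skew when it falls within a tolerance. A minimal Go sketch of that comparison, using the timestamps from the log (the 2-second tolerance is an illustrative assumption, not necessarily the value minikube uses):

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between the guest and host clocks.
func clockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	// Timestamps taken from the fix.go lines above.
	guest := time.Date(2024, 3, 11, 21, 34, 45, 4450463, time.UTC)
	host := time.Date(2024, 3, 11, 21, 34, 44, 919851167, time.UTC)

	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	delta := clockDelta(guest, host)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, delta <= tolerance)
	// Prints: delta=84.599296ms within tolerance=true
}
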
	I0311 21:34:45.026124   70908 start.go:83] releasing machines lock for "old-k8s-version-239315", held for 21.482500591s
	I0311 21:34:45.026158   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.026440   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:45.029366   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029778   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.029813   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029992   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030514   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030711   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030800   70908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:45.030846   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.030946   70908 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:45.030971   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.033851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.033989   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034264   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034292   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034324   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034348   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034429   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034618   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034633   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034799   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034814   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034979   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034977   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.035143   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.135748   70908 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:45.142408   70908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:45.297445   70908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:45.304482   70908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:45.304552   70908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:45.322754   70908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:45.322775   70908 start.go:494] detecting cgroup driver to use...
	I0311 21:34:45.322832   70908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:45.345988   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:45.363267   70908 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:45.363320   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:45.380892   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:45.396972   70908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:45.531640   70908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:45.700243   70908 docker.go:233] disabling docker service ...
	I0311 21:34:45.700306   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:45.730542   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:45.749068   70908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:45.903721   70908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:46.045122   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:46.065278   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:46.090726   70908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0311 21:34:46.090779   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.105783   70908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:46.105841   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.121702   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.136262   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.150628   70908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:46.163771   70908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:46.175613   70908 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:46.175675   70908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:46.193848   70908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:46.205694   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:46.344832   70908 ssh_runner.go:195] Run: sudo systemctl restart crio
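
The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup), then reloads systemd and restarts CRI-O. A hedged Go sketch of how those commands could be assembled and executed; runSSH is a hypothetical stand-in for the ssh_runner seen in the log, not part of any real API:

package crioconfig

import "fmt"

// ConfigureCRIO mirrors the sed-based edits in the log above. runSSH runs a
// shell command on the guest and returns an error on non-zero exit.
func ConfigureCRIO(runSSH func(cmd string) error, pauseImage, cgroupManager string) error {
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupManager),
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		if err := runSSH(c); err != nil {
			return fmt.Errorf("configuring cri-o with %q: %w", c, err)
		}
	}
	return nil
}
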
	I0311 21:34:46.501773   70908 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:46.501851   70908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:46.507932   70908 start.go:562] Will wait 60s for crictl version
	I0311 21:34:46.507988   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:46.512337   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:46.555165   70908 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:46.555249   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.588554   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.623785   70908 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0311 21:34:44.627149   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:47.128405   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:45.052882   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Start
	I0311 21:34:45.053039   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring networks are active...
	I0311 21:34:45.053710   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring network default is active
	I0311 21:34:45.054156   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring network mk-default-k8s-diff-port-766430 is active
	I0311 21:34:45.054499   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Getting domain xml...
	I0311 21:34:45.055347   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Creating domain...
	I0311 21:34:46.378216   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting to get IP...
	I0311 21:34:46.379054   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.379376   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.379485   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.379392   71893 retry.go:31] will retry after 242.915621ms: waiting for machine to come up
	I0311 21:34:46.623729   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.624348   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.624375   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.624304   71893 retry.go:31] will retry after 274.237436ms: waiting for machine to come up
	I0311 21:34:46.899864   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.900347   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.900381   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.900296   71893 retry.go:31] will retry after 333.693752ms: waiting for machine to come up
	I0311 21:34:47.235751   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.236278   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.236309   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:47.236220   71893 retry.go:31] will retry after 513.728994ms: waiting for machine to come up
	I0311 21:34:47.752081   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.752585   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.752622   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:47.752553   71893 retry.go:31] will retry after 575.202217ms: waiting for machine to come up
	I0311 21:34:48.329095   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:48.329524   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:48.329557   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:48.329477   71893 retry.go:31] will retry after 741.05703ms: waiting for machine to come up
	I0311 21:34:49.072641   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.073163   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.073195   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:49.073101   71893 retry.go:31] will retry after 802.911807ms: waiting for machine to come up
	I0311 21:34:45.528876   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:47.530391   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:49.530451   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:46.625154   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:46.627732   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628080   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:46.628102   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628304   70908 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:46.633367   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:46.649537   70908 kubeadm.go:877] updating cluster {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:46.649677   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:34:46.649733   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:46.699194   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:46.699264   70908 ssh_runner.go:195] Run: which lz4
	I0311 21:34:46.703944   70908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:34:46.709224   70908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:34:46.709258   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0311 21:34:48.747926   70908 crio.go:444] duration metric: took 2.044006932s to copy over tarball
	I0311 21:34:48.747994   70908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:34:49.629334   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:51.122454   70604 pod_ready.go:92] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:51.122481   70604 pod_ready.go:81] duration metric: took 11.006878828s for pod "coredns-5dd5756b68-czng9" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:51.122494   70604 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.227971   70604 pod_ready.go:92] pod "etcd-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.228001   70604 pod_ready.go:81] duration metric: took 1.105498501s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.228014   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.234804   70604 pod_ready.go:92] pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.234834   70604 pod_ready.go:81] duration metric: took 6.811865ms for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.234854   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.241448   70604 pod_ready.go:92] pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.241473   70604 pod_ready.go:81] duration metric: took 6.611927ms for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.241486   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n2qzt" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.249614   70604 pod_ready.go:92] pod "kube-proxy-n2qzt" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.249648   70604 pod_ready.go:81] duration metric: took 8.154372ms for pod "kube-proxy-n2qzt" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.249661   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:53.139924   70604 pod_ready.go:92] pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:53.139951   70604 pod_ready.go:81] duration metric: took 890.27792ms for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:53.139961   70604 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" ...
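
The pod_ready.go lines poll each kube-system pod until its Ready condition reports True, with a per-pod timeout (4m0s here). A minimal client-go sketch of the readiness check itself; building the clientset from the kubeconfig is omitted, and the helper name is illustrative rather than minikube's own:

package podready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the named pod's PodReady condition is True,
// which is the condition the pod_ready.go lines above are waiting on.
func isPodReady(ctx context.Context, c kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
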
	I0311 21:34:49.877965   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.878438   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.878460   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:49.878397   71893 retry.go:31] will retry after 1.163030899s: waiting for machine to come up
	I0311 21:34:51.042660   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:51.043181   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:51.043210   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:51.043131   71893 retry.go:31] will retry after 1.225509553s: waiting for machine to come up
	I0311 21:34:52.269779   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:52.270321   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:52.270358   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:52.270250   71893 retry.go:31] will retry after 2.091046831s: waiting for machine to come up
	I0311 21:34:54.363231   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:54.363664   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:54.363693   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:54.363618   71893 retry.go:31] will retry after 1.759309864s: waiting for machine to come up
	I0311 21:34:52.031032   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:54.529537   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:52.300295   70908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.55227284s)
	I0311 21:34:52.300322   70908 crio.go:451] duration metric: took 3.552370125s to extract the tarball
	I0311 21:34:52.300331   70908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:34:52.349405   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:52.395791   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:52.395821   70908 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:34:52.395892   70908 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.395955   70908 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.396002   70908 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.396010   70908 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.395959   70908 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.395932   70908 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.395921   70908 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0311 21:34:52.395974   70908 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397721   70908 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.397760   70908 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.397767   70908 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.397768   70908 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.397762   70908 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397804   70908 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.398008   70908 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0311 21:34:52.398129   70908 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.548255   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.549300   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.560293   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.564094   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.564433   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.569516   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.578251   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0311 21:34:52.674385   70908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0311 21:34:52.674427   70908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.674475   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.725602   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.741797   70908 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0311 21:34:52.741840   70908 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.741882   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.793195   70908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0311 21:34:52.793239   70908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.793278   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798118   70908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0311 21:34:52.798174   70908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.798220   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798241   70908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0311 21:34:52.798277   70908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.798312   70908 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0311 21:34:52.798333   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798285   70908 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0311 21:34:52.798378   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.798399   70908 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.798434   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798336   70908 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0311 21:34:52.798510   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.957658   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.957712   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.957765   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.957816   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.957846   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0311 21:34:52.957904   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0311 21:34:52.957925   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:53.106649   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0311 21:34:53.106699   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0311 21:34:53.106913   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0311 21:34:53.107837   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0311 21:34:53.116024   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0311 21:34:53.122060   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0311 21:34:53.122118   70908 cache_images.go:92] duration metric: took 726.282306ms to LoadCachedImages
	W0311 21:34:53.122205   70908 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
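
The cache_images.go lines above decide per image whether it "needs transfer": the image is inspected in the container runtime, and if it is missing or its ID does not match the expected digest, the stale copy is removed with crictl and a copy is loaded from the local cache directory (which fails here because the cached tarballs do not exist). A hedged sketch of that decision; runSSH is a hypothetical helper returning the command's stdout:

package cacheimages

import (
	"fmt"
	"strings"
)

// needsTransfer reports whether the runtime is missing the image (or holds it
// under a different ID), mirroring the "needs transfer" checks in the log.
func needsTransfer(runSSH func(cmd string) (string, error), image, wantID string) bool {
	out, err := runSSH(fmt.Sprintf("sudo podman image inspect --format {{.Id}} %s", image))
	if err != nil {
		// Inspect failing is treated as "image not present" in this sketch.
		return true
	}
	return strings.TrimSpace(out) != wantID
}
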
	I0311 21:34:53.122224   70908 kubeadm.go:928] updating node { 192.168.72.52 8443 v1.20.0 crio true true} ...
	I0311 21:34:53.122341   70908 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239315 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:53.122443   70908 ssh_runner.go:195] Run: crio config
	I0311 21:34:53.192161   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:34:53.192191   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:53.192211   70908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:53.192233   70908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.52 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239315 NodeName:old-k8s-version-239315 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0311 21:34:53.192405   70908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239315"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:53.192476   70908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0311 21:34:53.203965   70908 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:53.204019   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:53.215221   70908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0311 21:34:53.235943   70908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:34:53.255383   70908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0311 21:34:53.276634   70908 ssh_runner.go:195] Run: grep 192.168.72.52	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:53.281778   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:53.298479   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:53.450052   70908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:53.472459   70908 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315 for IP: 192.168.72.52
	I0311 21:34:53.472480   70908 certs.go:194] generating shared ca certs ...
	I0311 21:34:53.472524   70908 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:53.472676   70908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:53.472728   70908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:53.472771   70908 certs.go:256] generating profile certs ...
	I0311 21:34:53.472883   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/client.key
	I0311 21:34:53.472954   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key.1e888bb1
	I0311 21:34:53.473013   70908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key
	I0311 21:34:53.473143   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:53.473185   70908 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:53.473198   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:53.473237   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:53.473272   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:53.473307   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:53.473363   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:53.473988   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:53.527429   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:53.575908   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:53.622438   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:53.665366   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 21:34:53.702121   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0311 21:34:53.746066   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:53.779151   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:53.813286   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:53.847058   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:53.882261   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:53.912444   70908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:53.932592   70908 ssh_runner.go:195] Run: openssl version
	I0311 21:34:53.939200   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:53.955630   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960866   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960920   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.967258   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:53.981075   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:53.995065   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000196   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000272   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.008574   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:54.022782   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:54.037409   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042893   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042965   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.049497   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:54.062597   70908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:54.067971   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:54.074746   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:54.081323   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:54.088762   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:54.095529   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:54.102396   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
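
"openssl x509 -checkend 86400" exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is how the log decides whether the control-plane certificates need regeneration. The same check expressed with Go's crypto/x509 (the path is one of those from the log; this is an illustrative equivalent, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d of now -- the condition "openssl x509 -checkend" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
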
	I0311 21:34:54.109553   70908 kubeadm.go:391] StartCluster: {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:54.109639   70908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:54.109689   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.152063   70908 cri.go:89] found id: ""
	I0311 21:34:54.152143   70908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:54.163988   70908 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:54.164005   70908 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:54.164011   70908 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:54.164050   70908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:54.175616   70908 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:54.176779   70908 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239315" does not appear in /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:34:54.177542   70908 kubeconfig.go:62] /home/jenkins/minikube-integration/18358-11004/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239315" cluster setting kubeconfig missing "old-k8s-version-239315" context setting]
	I0311 21:34:54.178649   70908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:54.180405   70908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:54.191864   70908 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.52
	I0311 21:34:54.191891   70908 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:54.191903   70908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:54.191948   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.233779   70908 cri.go:89] found id: ""
	I0311 21:34:54.233852   70908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:54.253672   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:54.266010   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:54.266038   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:54.266085   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:54.277867   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:54.277918   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:54.288984   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:54.300133   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:54.300197   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:54.312090   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.323997   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:54.324059   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.337225   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:54.348223   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:54.348266   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
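The grep/rm sequence above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is deleted otherwise so that "kubeadm init phase kubeconfig" can regenerate it. A minimal Go sketch of that pattern (illustrative only, not the actual kubeadm.go code; it assumes passwordless sudo on the node):

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleConfigs mirrors the loop in the log: probe each config file for the
// control-plane endpoint and delete it when the endpoint (or the file) is missing.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		probe := fmt.Sprintf("sudo grep %s %s", endpoint, f)
		if err := exec.Command("/bin/sh", "-c", probe).Run(); err != nil {
			// grep exits non-zero when the endpoint is absent or the file does not exist.
			_ = exec.Command("/bin/sh", "-c", "sudo rm -f "+f).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}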
	I0311 21:34:54.359245   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:54.370003   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:54.525972   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.408437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.676995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.819933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.913736   70908 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:55.913811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
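The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs that follow are a poll loop: the apiserver wait re-checks roughly every 500ms until the process shows up or a deadline passes. A rough stand-alone sketch of that wait (hypothetical helper, not the real api_server.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process exists.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("kube-apiserver pid: %s", out)
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}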
	I0311 21:34:55.147500   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:57.148276   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:56.124678   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:56.125150   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:56.125183   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:56.125101   71893 retry.go:31] will retry after 2.284226205s: waiting for machine to come up
	I0311 21:34:58.412391   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:58.412973   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:58.413002   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:58.412923   71893 retry.go:31] will retry after 4.532871869s: waiting for machine to come up
	I0311 21:34:57.031683   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:59.032261   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:56.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:56.914753   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.413928   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.914123   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.413931   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.914199   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.414205   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.913880   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.914121   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.148774   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:01.646997   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:03.647990   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:02.948316   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:02.948762   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:35:02.948790   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:35:02.948704   71893 retry.go:31] will retry after 4.885152649s: waiting for machine to come up
	I0311 21:35:01.529589   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:04.028860   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:01.414003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:01.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.913977   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.414740   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.914735   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.414726   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.914846   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.914715   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.648516   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:08.147744   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:07.835002   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.835551   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Found IP for machine: 192.168.61.11
	I0311 21:35:07.835585   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Reserving static IP address...
	I0311 21:35:07.835601   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has current primary IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.836026   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-766430", mac: "52:54:00:41:07:8d", ip: "192.168.61.11"} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.836055   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | skip adding static IP to network mk-default-k8s-diff-port-766430 - found existing host DHCP lease matching {name: "default-k8s-diff-port-766430", mac: "52:54:00:41:07:8d", ip: "192.168.61.11"}
	I0311 21:35:07.836075   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Reserved static IP address: 192.168.61.11
	I0311 21:35:07.836110   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Getting to WaitForSSH function...
	I0311 21:35:07.836125   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for SSH to be available...
	I0311 21:35:07.838230   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.838601   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.838631   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.838757   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Using SSH client type: external
	I0311 21:35:07.838784   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa (-rw-------)
	I0311 21:35:07.838830   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:35:07.838871   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | About to run SSH command:
	I0311 21:35:07.838897   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | exit 0
	I0311 21:35:07.968765   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | SSH cmd err, output: <nil>: 
	I0311 21:35:07.969119   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetConfigRaw
	I0311 21:35:07.969756   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:07.972490   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.972921   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.972949   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.973180   70417 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/config.json ...
	I0311 21:35:07.973362   70417 machine.go:94] provisionDockerMachine start ...
	I0311 21:35:07.973381   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:07.973582   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:07.975926   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.976254   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.976277   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.976419   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:07.976566   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:07.976704   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:07.976847   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:07.976991   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:07.977161   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:07.977171   70417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:35:08.093841   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:35:08.093864   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.094076   70417 buildroot.go:166] provisioning hostname "default-k8s-diff-port-766430"
	I0311 21:35:08.094100   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.094329   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.097134   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.097498   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.097528   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.097670   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.097854   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.098021   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.098178   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.098409   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.098642   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.098657   70417 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-766430 && echo "default-k8s-diff-port-766430" | sudo tee /etc/hostname
	I0311 21:35:08.233860   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766430
	
	I0311 21:35:08.233890   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.236977   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.237387   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.237408   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.237596   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.237791   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.237962   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.238194   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.238359   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.238515   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.238532   70417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-766430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-766430/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-766430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:35:08.363393   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:35:08.363419   70417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:35:08.363471   70417 buildroot.go:174] setting up certificates
	I0311 21:35:08.363484   70417 provision.go:84] configureAuth start
	I0311 21:35:08.363497   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.363780   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:08.366605   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.366990   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.367012   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.367139   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.369314   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.369650   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.369676   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.369798   70417 provision.go:143] copyHostCerts
	I0311 21:35:08.369853   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:35:08.369863   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:35:08.369915   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:35:08.370005   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:35:08.370013   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:35:08.370032   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:35:08.370091   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:35:08.370098   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:35:08.370114   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:35:08.370169   70417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-766430 san=[127.0.0.1 192.168.61.11 default-k8s-diff-port-766430 localhost minikube]
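Here provisioning issues a fresh server certificate signed by the host CA, with SANs covering the loopback address, the machine IP, the machine name, localhost and minikube; the next lines copy it to /etc/docker on the guest. The sketch below only reproduces the certificate shape with Go's crypto/x509 (a throwaway CA stands in for ca.pem/ca-key.pem, and error handling is elided; this is not the actual provision code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the same SAN set the log reports for this machine.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-766430"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.11")},
		DNSNames:     []string{"default-k8s-diff-port-766430", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}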
	I0311 21:35:08.542469   70417 provision.go:177] copyRemoteCerts
	I0311 21:35:08.542529   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:35:08.542550   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.545388   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.545750   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.545782   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.545958   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.546115   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.546264   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.546360   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:08.635866   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:35:08.667490   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0311 21:35:08.697944   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:35:08.726836   70417 provision.go:87] duration metric: took 363.34159ms to configureAuth
	I0311 21:35:08.726860   70417 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:35:08.727033   70417 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:35:08.727115   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.730050   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.730458   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.730489   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.730788   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.730987   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.731170   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.731317   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.731466   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.731607   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.731629   70417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:35:09.035100   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:35:09.035129   70417 machine.go:97] duration metric: took 1.061753229s to provisionDockerMachine
	I0311 21:35:09.035142   70417 start.go:293] postStartSetup for "default-k8s-diff-port-766430" (driver="kvm2")
	I0311 21:35:09.035151   70417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:35:09.035165   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.035458   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:35:09.035484   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.038340   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.038638   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.038668   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.038829   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.039027   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.039178   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.039343   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:09.133013   70417 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:35:09.138043   70417 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:35:09.138065   70417 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:35:09.138166   70417 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:35:09.138259   70417 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:35:09.138364   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:35:09.149527   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:35:09.176424   70417 start.go:296] duration metric: took 141.271199ms for postStartSetup
	I0311 21:35:09.176460   70417 fix.go:56] duration metric: took 24.15021813s for fixHost
	I0311 21:35:09.176479   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.179447   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.179830   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.179859   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.180147   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.180402   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.180566   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.180758   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.180974   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:09.181186   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:09.181200   70417 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 21:35:09.297740   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192909.282566583
	
	I0311 21:35:09.297764   70417 fix.go:216] guest clock: 1710192909.282566583
	I0311 21:35:09.297773   70417 fix.go:229] Guest: 2024-03-11 21:35:09.282566583 +0000 UTC Remote: 2024-03-11 21:35:09.176465496 +0000 UTC m=+364.839103648 (delta=106.101087ms)
	I0311 21:35:09.297795   70417 fix.go:200] guest clock delta is within tolerance: 106.101087ms
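The guest-clock check above runs "date +%s.%N" on the guest and compares the result with the host clock; here the skew is about 106ms, which is inside tolerance, so no resync is needed. A compact sketch of the comparison (run locally for simplicity; the real check goes over SSH, and the 1-second tolerance below is an assumption, not minikube's configured value):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	out, err := exec.Command("date", "+%s.%N").Output() // the real flow runs this on the guest over SSH
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", delta, delta < time.Second)
}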
	I0311 21:35:09.297802   70417 start.go:83] releasing machines lock for "default-k8s-diff-port-766430", held for 24.271590337s
	I0311 21:35:09.297825   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.298067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:09.300989   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.301399   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.301422   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.301604   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302091   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302291   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302385   70417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:35:09.302433   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.302490   70417 ssh_runner.go:195] Run: cat /version.json
	I0311 21:35:09.302515   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.305403   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305572   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305802   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.305831   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305912   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.306042   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.306067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.306067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.306223   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.306351   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.306430   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:09.306511   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.306645   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.306772   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:06.528726   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:09.029055   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:09.419852   70417 ssh_runner.go:195] Run: systemctl --version
	I0311 21:35:09.427141   70417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:35:09.579321   70417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:35:09.586396   70417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:35:09.586470   70417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:35:09.606617   70417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:35:09.606639   70417 start.go:494] detecting cgroup driver to use...
	I0311 21:35:09.606705   70417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:35:09.627066   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:35:09.646091   70417 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:35:09.646151   70417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:35:09.662307   70417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:35:09.679793   70417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:35:09.828827   70417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:35:09.984773   70417 docker.go:233] disabling docker service ...
	I0311 21:35:09.984843   70417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:35:10.003968   70417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:35:10.018609   70417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:35:10.174297   70417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:35:10.316762   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:35:10.338008   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:35:10.359320   70417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:35:10.359374   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.371953   70417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:35:10.372008   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.384823   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.397305   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.409521   70417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:35:10.424714   70417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:35:10.438470   70417 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:35:10.438529   70417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:35:10.454436   70417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:35:10.465004   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:35:10.611379   70417 ssh_runner.go:195] Run: sudo systemctl restart crio
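The 21:35:10.359 to 21:35:10.611 steps above reconfigure CRI-O in place: the pause image and cgroup driver are rewritten in /etc/crio/crio.conf.d/02-crio.conf with sed, br_netfilter and IPv4 forwarding are enabled, and the daemon is restarted. Condensed into one illustrative Go wrapper (the command strings are taken from the log; the wrapper itself is not minikube code, and the failed sysctl probe seen in the log is simply tolerated there rather than fatal):

package main

import "os/exec"

func main() {
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		"sudo modprobe br_netfilter",
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		// Each step mirrors one ssh_runner invocation from the log above.
		if err := exec.Command("/bin/sh", "-c", s).Run(); err != nil {
			panic(err)
		}
	}
}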
	I0311 21:35:10.786860   70417 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:35:10.786959   70417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:35:10.792496   70417 start.go:562] Will wait 60s for crictl version
	I0311 21:35:10.792551   70417 ssh_runner.go:195] Run: which crictl
	I0311 21:35:10.797079   70417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:35:10.837010   70417 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:35:10.837086   70417 ssh_runner.go:195] Run: crio --version
	I0311 21:35:10.868308   70417 ssh_runner.go:195] Run: crio --version
	I0311 21:35:10.900087   70417 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 21:35:06.414389   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:06.914233   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.414565   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.914773   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.414348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.914743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.413987   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.914698   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.150688   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:12.648444   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:10.901304   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:10.904103   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:10.904380   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:10.904407   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:10.904557   70417 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0311 21:35:10.909585   70417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:35:10.924163   70417 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-766430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:35:10.924311   70417 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:35:10.924408   70417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:35:10.969555   70417 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 21:35:10.969623   70417 ssh_runner.go:195] Run: which lz4
	I0311 21:35:10.974054   70417 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:35:10.978776   70417 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:35:10.978811   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 21:35:12.893346   70417 crio.go:444] duration metric: took 1.91931676s to copy over tarball
	I0311 21:35:12.893421   70417 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:35:11.031301   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:13.527896   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:11.414320   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:11.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.414529   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.914476   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.414282   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.914426   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.414521   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.914001   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.414839   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.913921   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.648625   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:17.148688   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:15.772070   70417 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.878627154s)
	I0311 21:35:15.772094   70417 crio.go:451] duration metric: took 2.878719213s to extract the tarball
	I0311 21:35:15.772101   70417 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:35:15.818581   70417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:35:15.872635   70417 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:35:15.872658   70417 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:35:15.872667   70417 kubeadm.go:928] updating node { 192.168.61.11 8444 v1.28.4 crio true true} ...
	I0311 21:35:15.872823   70417 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-766430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:35:15.872933   70417 ssh_runner.go:195] Run: crio config
	I0311 21:35:15.928776   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:35:15.928803   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:35:15.928818   70417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:35:15.928843   70417 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-766430 NodeName:default-k8s-diff-port-766430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:35:15.929018   70417 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-766430"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
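	For context, the YAML above is what minikube renders from the "kubeadm options" struct logged earlier and ships to /var/tmp/minikube/kubeadm.yaml.new. The snippet below is a minimal, illustrative sketch (not minikube's actual template or types) of how a KubeletConfiguration fragment with the same fields could be produced in Go with text/template; the option struct and field names are assumptions made for the example.

```go
package main

import (
	"os"
	"text/template"
)

// kubeletOpts is a hypothetical option struct; the values mirror the
// "kubeadm options" line in the log (cgroupfs driver, cri-o socket, etc.).
type kubeletOpts struct {
	ClientCAFile  string
	CgroupDriver  string
	CRIEndpoint   string
	ClusterDomain string
	StaticPodPath string
}

const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: {{.ClientCAFile}}
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRIEndpoint}}
clusterDomain: "{{.ClusterDomain}}"
staticPodPath: {{.StaticPodPath}}
`

func main() {
	opts := kubeletOpts{
		ClientCAFile:  "/var/lib/minikube/certs/ca.crt",
		CgroupDriver:  "cgroupfs",
		CRIEndpoint:   "unix:///var/run/crio/crio.sock",
		ClusterDomain: "cluster.local",
		StaticPodPath: "/etc/kubernetes/manifests",
	}
	// Render the fragment to stdout; minikube instead writes the full file
	// to the node over SSH, as the scp lines below show.
	template.Must(template.New("kubelet").Parse(kubeletTmpl)).Execute(os.Stdout, opts)
}
```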
	I0311 21:35:15.929090   70417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 21:35:15.941853   70417 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:35:15.941908   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:35:15.954936   70417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0311 21:35:15.975236   70417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:35:15.994509   70417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0311 21:35:16.014058   70417 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I0311 21:35:16.018972   70417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:35:16.035169   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:35:16.160453   70417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:35:16.182252   70417 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430 for IP: 192.168.61.11
	I0311 21:35:16.182272   70417 certs.go:194] generating shared ca certs ...
	I0311 21:35:16.182286   70417 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:35:16.182419   70417 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:35:16.182465   70417 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:35:16.182475   70417 certs.go:256] generating profile certs ...
	I0311 21:35:16.182545   70417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/client.key
	I0311 21:35:16.182601   70417 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.key.2c00376c
	I0311 21:35:16.182635   70417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.key
	I0311 21:35:16.182754   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:35:16.182783   70417 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:35:16.182789   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:35:16.182823   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:35:16.182844   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:35:16.182867   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:35:16.182901   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:35:16.183517   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:35:16.231409   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:35:16.277004   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:35:16.315346   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:35:16.352697   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0311 21:35:16.388570   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 21:35:16.422830   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:35:16.452562   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 21:35:16.480976   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:35:16.507149   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:35:16.535832   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:35:16.566697   70417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:35:16.587454   70417 ssh_runner.go:195] Run: openssl version
	I0311 21:35:16.593880   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:35:16.608197   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.613604   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.613673   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.620156   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:35:16.632634   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:35:16.646047   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.652530   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.652591   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.660480   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:35:16.673572   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:35:16.687161   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.692589   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.692632   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.705471   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:35:16.718251   70417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:35:16.723979   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:35:16.731335   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:35:16.738485   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:35:16.745489   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:35:16.752295   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:35:16.759251   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
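	The `openssl x509 -checkend 86400` invocations above verify that each reused control-plane certificate stays valid for at least another 24 hours before the restart proceeds. A hedged pure-Go equivalent of that check (an illustrative sketch using crypto/x509, not minikube's code):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path remains valid for at
// least d, mirroring `openssl x509 -noout -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}
```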
	I0311 21:35:16.766128   70417 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-766430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:35:16.766237   70417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:35:16.766292   70417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:35:16.806418   70417 cri.go:89] found id: ""
	I0311 21:35:16.806478   70417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:35:16.821434   70417 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:35:16.821455   70417 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:35:16.821462   70417 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:35:16.821514   70417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:35:16.835457   70417 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:35:16.836764   70417 kubeconfig.go:125] found "default-k8s-diff-port-766430" server: "https://192.168.61.11:8444"
	I0311 21:35:16.839163   70417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:35:16.850037   70417 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.11
	I0311 21:35:16.850065   70417 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:35:16.850074   70417 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:35:16.850117   70417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:35:16.895532   70417 cri.go:89] found id: ""
	I0311 21:35:16.895612   70417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:35:16.913151   70417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:35:16.927989   70417 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:35:16.928014   70417 kubeadm.go:156] found existing configuration files:
	
	I0311 21:35:16.928073   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0311 21:35:16.939803   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:35:16.939849   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:35:16.950103   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0311 21:35:16.960164   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:35:16.960213   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:35:16.970349   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0311 21:35:16.980056   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:35:16.980098   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:35:16.990189   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0311 21:35:16.999799   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:35:16.999874   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:35:17.010502   70417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:35:17.021106   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:17.136170   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.044684   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.296278   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.376702   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
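	On this restart path the control plane is rebuilt phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing /var/tmp/minikube/kubeadm.yaml rather than with a full `kubeadm init`. A hedged sketch of driving that same sequence locally (illustrative only; minikube runs these commands over SSH via its ssh_runner):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Phases in the order the log shows for the restarted control plane.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
}
```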
	I0311 21:35:18.473740   70417 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:35:18.473840   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.974894   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.529099   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:17.755777   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:20.028341   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:16.414018   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:16.914685   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.414894   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.914319   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.414875   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.914338   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.414496   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.914396   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.414731   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.914149   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.648967   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:22.148024   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:19.474609   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.499907   70417 api_server.go:72] duration metric: took 1.026169594s to wait for apiserver process to appear ...
	I0311 21:35:19.499931   70417 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:35:19.499951   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:19.500566   70417 api_server.go:269] stopped: https://192.168.61.11:8444/healthz: Get "https://192.168.61.11:8444/healthz": dial tcp 192.168.61.11:8444: connect: connection refused
	I0311 21:35:20.000807   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:22.693958   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:35:22.693991   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:35:22.694006   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:22.772747   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:22.772792   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:23.000004   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:23.004763   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:23.004805   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:23.500112   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:23.507209   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:23.507236   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:24.000861   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:24.006793   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:24.006830   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:24.500264   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:24.508242   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0311 21:35:24.520230   70417 api_server.go:141] control plane version: v1.28.4
	I0311 21:35:24.520255   70417 api_server.go:131] duration metric: took 5.020318338s to wait for apiserver health ...
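	The 403 and 500 responses above are expected while the restarted apiserver finishes its post-start hooks; the wait simply retries /healthz until it returns 200. A minimal polling sketch under those assumptions (anonymous HTTPS probe with certificate verification skipped, fixed 500 ms retry interval; not minikube's exact implementation):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. 403, 500 and connection-refused are all treated as
// "not ready yet", matching the behaviour visible in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The anonymous probe presents no client certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.11:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```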
	I0311 21:35:24.520285   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:35:24.520291   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:35:24.522151   70417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:35:22.029963   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:24.530052   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:21.414126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:21.914012   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.414680   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.414478   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.914770   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.414370   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.914772   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.413991   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.914516   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.149179   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:26.647134   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:28.647725   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:24.523964   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:35:24.538536   70417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:35:24.583279   70417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:35:24.594703   70417 system_pods.go:59] 8 kube-system pods found
	I0311 21:35:24.594730   70417 system_pods.go:61] "coredns-5dd5756b68-pkn9d" [ee4de3f7-1044-4dc9-91dc-d9b23493b0bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:35:24.594737   70417 system_pods.go:61] "etcd-default-k8s-diff-port-766430" [96b9327c-f97d-463f-9d1e-3210b4032aab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:35:24.594751   70417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766430" [fc650f48-2e28-4219-8571-8b6c43891eb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:35:24.594763   70417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766430" [c7cc5d40-ad56-4132-ab81-3422ffe1d5b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:35:24.594772   70417 system_pods.go:61] "kube-proxy-cggzr" [f6b7fe4e-7d57-4604-b63d-f9890826b659] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:35:24.594784   70417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766430" [8a156fec-b2f3-46e8-bf0d-0bf291ef8783] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:35:24.594795   70417 system_pods.go:61] "metrics-server-57f55c9bc5-kxl6n" [ac62700b-a39a-480e-841e-852bf3c66e7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:35:24.594805   70417 system_pods.go:61] "storage-provisioner" [a0b03582-0d90-4a7f-919c-0552046edcb5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:35:24.594821   70417 system_pods.go:74] duration metric: took 11.523907ms to wait for pod list to return data ...
	I0311 21:35:24.594830   70417 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:35:24.606500   70417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:35:24.606529   70417 node_conditions.go:123] node cpu capacity is 2
	I0311 21:35:24.606546   70417 node_conditions.go:105] duration metric: took 11.711241ms to run NodePressure ...
	I0311 21:35:24.606565   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:24.893361   70417 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:35:24.899200   70417 kubeadm.go:733] kubelet initialised
	I0311 21:35:24.899225   70417 kubeadm.go:734] duration metric: took 5.837351ms waiting for restarted kubelet to initialise ...
	I0311 21:35:24.899235   70417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:35:24.905858   70417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:26.912640   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:28.916566   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
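	The pod_ready.go lines above poll each system-critical pod until its Ready condition turns True (coredns takes about 7 s here). A client-go sketch of the same check, for illustration only; the kubeconfig path, pod name and poll interval are assumptions, not values taken from minikube's code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig location; the profile's kubeconfig would be used in practice.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-pkn9d", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to become Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```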
	I0311 21:35:27.029381   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:29.529565   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:26.414267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:26.914876   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.414469   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.914513   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.414924   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.914126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.414526   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.914039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.414305   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.914438   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.147527   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:33.147694   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.413246   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.912878   70417 pod_ready.go:92] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:31.912899   70417 pod_ready.go:81] duration metric: took 7.007017714s for pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:31.912908   70417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:33.977091   70417 pod_ready.go:102] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:32.029295   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:34.529021   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.414610   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.914472   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.414158   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.914169   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.414745   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.914820   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.414071   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.914228   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.414135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.914695   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.148058   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:37.648200   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.422565   70417 pod_ready.go:102] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.921304   70417 pod_ready.go:92] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.921328   70417 pod_ready.go:81] duration metric: took 5.008411943s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.921340   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.927268   70417 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.927284   70417 pod_ready.go:81] duration metric: took 5.936969ms for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.927292   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.932540   70417 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.932563   70417 pod_ready.go:81] duration metric: took 5.264737ms for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.932575   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cggzr" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.937456   70417 pod_ready.go:92] pod "kube-proxy-cggzr" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.937473   70417 pod_ready.go:81] duration metric: took 4.892276ms for pod "kube-proxy-cggzr" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.937480   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.942372   70417 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.942390   70417 pod_ready.go:81] duration metric: took 4.902792ms for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.942401   70417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:38.949452   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.531316   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:39.030491   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.414435   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:36.914157   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.414539   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.914811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.414070   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.914303   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.413935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.914135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.414569   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.914106   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.147355   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:42.148353   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:40.950204   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:42.950335   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:41.528874   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:43.530140   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:41.414404   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:41.914323   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.414215   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.914566   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.414671   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.914658   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.414703   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.913966   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.414045   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.914260   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.648282   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:47.148247   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:45.449963   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:47.451576   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:46.029164   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:48.529137   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:46.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:46.914821   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.414210   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.914008   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.413884   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.914160   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.414877   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.914379   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.414293   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.913867   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.148585   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.648372   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:49.949667   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.950874   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:53.953067   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:50.529616   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:53.030586   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.414582   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:51.914453   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.414668   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.914816   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.414768   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.914592   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.414743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.914307   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:55.414000   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:55.914553   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:55.914636   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:55.957434   70908 cri.go:89] found id: ""
	I0311 21:35:55.957459   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.957470   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:55.957477   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:55.957545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:55.995255   70908 cri.go:89] found id: ""
	I0311 21:35:55.995279   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.995290   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:55.995305   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:55.995364   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:56.038893   70908 cri.go:89] found id: ""
	I0311 21:35:56.038916   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.038926   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:56.038933   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:56.038990   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:54.147165   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.148641   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.647841   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.451057   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.950421   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:55.528922   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.029209   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:00.029912   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.081497   70908 cri.go:89] found id: ""
	I0311 21:35:56.081517   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.081528   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:56.081534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:56.081591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:56.120047   70908 cri.go:89] found id: ""
	I0311 21:35:56.120071   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.120079   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:56.120084   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:56.120156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:56.157350   70908 cri.go:89] found id: ""
	I0311 21:35:56.157370   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.157377   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:56.157382   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:56.157433   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:56.198324   70908 cri.go:89] found id: ""
	I0311 21:35:56.198354   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.198374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:56.198381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:56.198437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:56.236579   70908 cri.go:89] found id: ""
	I0311 21:35:56.236608   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.236619   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:35:56.236691   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:56.236712   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:56.377789   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:35:56.377809   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:56.377825   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:56.449765   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:56.449807   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:56.502417   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:56.502448   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:56.557205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:56.557241   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:35:59.073411   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:59.088205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:59.088287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:59.126458   70908 cri.go:89] found id: ""
	I0311 21:35:59.126486   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.126494   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:59.126499   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:59.126555   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:59.197887   70908 cri.go:89] found id: ""
	I0311 21:35:59.197911   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.197919   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:59.197924   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:59.197967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:59.239523   70908 cri.go:89] found id: ""
	I0311 21:35:59.239552   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.239562   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:59.239570   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:59.239642   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:59.280903   70908 cri.go:89] found id: ""
	I0311 21:35:59.280930   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.280940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:59.280947   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:59.281024   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:59.320218   70908 cri.go:89] found id: ""
	I0311 21:35:59.320242   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.320254   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:59.320260   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:59.320314   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:59.361235   70908 cri.go:89] found id: ""
	I0311 21:35:59.361265   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.361276   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:59.361283   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:59.361352   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:59.409477   70908 cri.go:89] found id: ""
	I0311 21:35:59.409503   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.409514   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:59.409522   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:59.409568   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:59.454704   70908 cri.go:89] found id: ""
	I0311 21:35:59.454728   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.454739   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:35:59.454748   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:59.454767   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:59.525839   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:59.525864   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:59.569577   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:59.569606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:59.628402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:59.628437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:35:59.647181   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:59.647208   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:59.731300   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:00.650515   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:03.146560   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:01.449702   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:03.950341   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:02.030569   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:04.529453   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:02.232458   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:02.246948   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:02.247025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:02.290561   70908 cri.go:89] found id: ""
	I0311 21:36:02.290588   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.290599   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:02.290605   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:02.290659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:02.333788   70908 cri.go:89] found id: ""
	I0311 21:36:02.333814   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.333821   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:02.333826   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:02.333877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:02.375774   70908 cri.go:89] found id: ""
	I0311 21:36:02.375798   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.375806   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:02.375812   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:02.375862   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:02.414741   70908 cri.go:89] found id: ""
	I0311 21:36:02.414781   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.414803   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:02.414810   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:02.414875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:02.456637   70908 cri.go:89] found id: ""
	I0311 21:36:02.456660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.456670   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:02.456677   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:02.456759   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:02.494633   70908 cri.go:89] found id: ""
	I0311 21:36:02.494660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.494670   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:02.494678   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:02.494738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:02.536187   70908 cri.go:89] found id: ""
	I0311 21:36:02.536212   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.536223   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:02.536230   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:02.536291   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:02.574933   70908 cri.go:89] found id: ""
	I0311 21:36:02.574962   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.574973   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:02.574985   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:02.575001   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:02.656610   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:02.656637   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:02.656653   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:02.730514   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:02.730548   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:02.776009   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:02.776041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:02.829792   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:02.829826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.345568   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:05.360082   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:05.360164   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:05.406106   70908 cri.go:89] found id: ""
	I0311 21:36:05.406131   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.406141   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:05.406147   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:05.406203   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:05.449584   70908 cri.go:89] found id: ""
	I0311 21:36:05.449608   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.449617   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:05.449624   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:05.449680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:05.493869   70908 cri.go:89] found id: ""
	I0311 21:36:05.493898   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.493912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:05.493928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:05.493994   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:05.563506   70908 cri.go:89] found id: ""
	I0311 21:36:05.563532   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.563542   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:05.563549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:05.563600   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:05.630140   70908 cri.go:89] found id: ""
	I0311 21:36:05.630165   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.630172   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:05.630177   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:05.630230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:05.675584   70908 cri.go:89] found id: ""
	I0311 21:36:05.675612   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.675623   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:05.675631   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:05.675689   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:05.720521   70908 cri.go:89] found id: ""
	I0311 21:36:05.720548   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.720557   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:05.720563   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:05.720615   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:05.759323   70908 cri.go:89] found id: ""
	I0311 21:36:05.759351   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.759359   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:05.759367   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:05.759379   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:05.801024   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:05.801050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:05.856330   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:05.856356   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.871299   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:05.871324   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:05.950218   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:05.950245   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:05.950259   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:05.148227   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:07.647389   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:05.950833   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:08.449548   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:07.028964   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:09.029396   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:08.535502   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:08.552152   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:08.552220   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:08.596602   70908 cri.go:89] found id: ""
	I0311 21:36:08.596707   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.596731   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:08.596755   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:08.596820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:08.641091   70908 cri.go:89] found id: ""
	I0311 21:36:08.641119   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.641130   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:08.641137   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:08.641198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:08.684466   70908 cri.go:89] found id: ""
	I0311 21:36:08.684494   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.684503   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:08.684510   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:08.684570   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:08.730899   70908 cri.go:89] found id: ""
	I0311 21:36:08.730924   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.730931   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:08.730937   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:08.730997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:08.775293   70908 cri.go:89] found id: ""
	I0311 21:36:08.775317   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.775324   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:08.775330   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:08.775387   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:08.816098   70908 cri.go:89] found id: ""
	I0311 21:36:08.816126   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.816137   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:08.816144   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:08.816207   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:08.857413   70908 cri.go:89] found id: ""
	I0311 21:36:08.857449   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.857460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:08.857476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:08.857541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:08.898252   70908 cri.go:89] found id: ""
	I0311 21:36:08.898283   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.898293   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:08.898302   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:08.898313   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:08.955162   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:08.955188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:08.970234   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:08.970258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:09.055025   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:09.055043   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:09.055055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:09.140345   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:09.140376   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:10.148323   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:12.647037   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:10.450796   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:12.450839   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:11.529842   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:14.029706   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:11.681542   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:11.697407   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:11.697481   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:11.740239   70908 cri.go:89] found id: ""
	I0311 21:36:11.740264   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.740274   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:11.740280   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:11.740336   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:11.777625   70908 cri.go:89] found id: ""
	I0311 21:36:11.777655   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.777667   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:11.777674   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:11.777745   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:11.817202   70908 cri.go:89] found id: ""
	I0311 21:36:11.817226   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.817233   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:11.817239   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:11.817306   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:11.858912   70908 cri.go:89] found id: ""
	I0311 21:36:11.858933   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.858940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:11.858945   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:11.858998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:11.897841   70908 cri.go:89] found id: ""
	I0311 21:36:11.897876   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.897887   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:11.897895   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:11.897955   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:11.936181   70908 cri.go:89] found id: ""
	I0311 21:36:11.936207   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.936218   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:11.936226   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:11.936293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:11.981882   70908 cri.go:89] found id: ""
	I0311 21:36:11.981905   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.981915   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:11.981922   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:11.981982   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:12.022270   70908 cri.go:89] found id: ""
	I0311 21:36:12.022298   70908 logs.go:276] 0 containers: []
	W0311 21:36:12.022309   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:12.022320   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:12.022333   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:12.074640   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:12.074668   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:12.089854   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:12.089879   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:12.179578   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:12.179595   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:12.179606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:12.263249   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:12.263285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:14.811547   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:14.827075   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:14.827175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:14.870512   70908 cri.go:89] found id: ""
	I0311 21:36:14.870544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.870555   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:14.870563   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:14.870625   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:14.908521   70908 cri.go:89] found id: ""
	I0311 21:36:14.908544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.908553   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:14.908558   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:14.908607   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:14.951702   70908 cri.go:89] found id: ""
	I0311 21:36:14.951729   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.951739   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:14.951746   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:14.951805   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:14.992590   70908 cri.go:89] found id: ""
	I0311 21:36:14.992618   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.992630   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:14.992638   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:14.992698   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:15.034535   70908 cri.go:89] found id: ""
	I0311 21:36:15.034556   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.034563   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:15.034569   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:15.034614   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:15.077175   70908 cri.go:89] found id: ""
	I0311 21:36:15.077200   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.077210   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:15.077218   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:15.077283   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:15.121500   70908 cri.go:89] found id: ""
	I0311 21:36:15.121530   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.121541   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:15.121549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:15.121655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:15.162712   70908 cri.go:89] found id: ""
	I0311 21:36:15.162738   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.162748   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:15.162757   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:15.162776   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:15.241469   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:15.241488   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:15.241499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:15.322257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:15.322291   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:15.368258   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:15.368285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:15.427131   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:15.427163   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:14.648776   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:17.148710   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:14.452948   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:16.949085   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:18.950111   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:16.030409   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:18.529122   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:17.944348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:17.958629   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:17.958704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:17.995869   70908 cri.go:89] found id: ""
	I0311 21:36:17.995895   70908 logs.go:276] 0 containers: []
	W0311 21:36:17.995904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:17.995914   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:17.995976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:18.032273   70908 cri.go:89] found id: ""
	I0311 21:36:18.032300   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.032308   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:18.032313   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:18.032361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:18.072497   70908 cri.go:89] found id: ""
	I0311 21:36:18.072519   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.072526   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:18.072532   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:18.072578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:18.110091   70908 cri.go:89] found id: ""
	I0311 21:36:18.110119   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.110129   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:18.110136   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:18.110199   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:18.152217   70908 cri.go:89] found id: ""
	I0311 21:36:18.152261   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.152272   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:18.152280   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:18.152347   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:18.193957   70908 cri.go:89] found id: ""
	I0311 21:36:18.193989   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.194000   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:18.194008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:18.194086   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:18.231828   70908 cri.go:89] found id: ""
	I0311 21:36:18.231861   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.231873   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:18.231880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:18.231939   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:18.271862   70908 cri.go:89] found id: ""
	I0311 21:36:18.271896   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.271907   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:18.271917   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:18.271933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:18.325405   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:18.325440   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:18.344560   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:18.344593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:18.425051   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:18.425075   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:18.425093   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:18.513247   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:18.513287   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:19.646758   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.647702   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.649318   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.450692   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.950088   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.028812   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.029828   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.060499   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:21.076648   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:21.076716   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:21.117270   70908 cri.go:89] found id: ""
	I0311 21:36:21.117298   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.117309   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:21.117317   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:21.117388   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:21.159005   70908 cri.go:89] found id: ""
	I0311 21:36:21.159045   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.159056   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:21.159063   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:21.159122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:21.196576   70908 cri.go:89] found id: ""
	I0311 21:36:21.196599   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.196609   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:21.196617   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:21.196677   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:21.237689   70908 cri.go:89] found id: ""
	I0311 21:36:21.237718   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.237729   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:21.237734   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:21.237783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:21.280662   70908 cri.go:89] found id: ""
	I0311 21:36:21.280696   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.280707   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:21.280714   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:21.280795   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:21.321475   70908 cri.go:89] found id: ""
	I0311 21:36:21.321501   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.321511   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:21.321518   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:21.321581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:21.365186   70908 cri.go:89] found id: ""
	I0311 21:36:21.365209   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.365216   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:21.365221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:21.365276   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:21.408678   70908 cri.go:89] found id: ""
	I0311 21:36:21.408713   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.408725   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:21.408754   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:21.408771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:21.466635   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:21.466663   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:21.482596   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:21.482622   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:21.556750   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:21.556769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:21.556780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:21.643095   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:21.643126   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.195112   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:24.208829   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:24.208895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:24.245956   70908 cri.go:89] found id: ""
	I0311 21:36:24.245981   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.245989   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:24.245995   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:24.246053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:24.289740   70908 cri.go:89] found id: ""
	I0311 21:36:24.289766   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.289778   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:24.289784   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:24.289846   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:24.336911   70908 cri.go:89] found id: ""
	I0311 21:36:24.336963   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.336977   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:24.336986   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:24.337057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:24.381715   70908 cri.go:89] found id: ""
	I0311 21:36:24.381739   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.381753   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:24.381761   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:24.381817   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:24.423759   70908 cri.go:89] found id: ""
	I0311 21:36:24.423787   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.423797   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:24.423805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:24.423882   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:24.468903   70908 cri.go:89] found id: ""
	I0311 21:36:24.468931   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.468946   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:24.468954   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:24.469013   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:24.509602   70908 cri.go:89] found id: ""
	I0311 21:36:24.509629   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.509639   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:24.509646   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:24.509706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:24.551483   70908 cri.go:89] found id: ""
	I0311 21:36:24.551511   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.551522   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:24.551532   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:24.551545   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:24.567123   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:24.567154   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:24.644215   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:24.644247   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:24.644262   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:24.726438   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:24.726469   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.779567   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:24.779596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:26.146823   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:28.148291   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:26.450637   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:28.949850   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:25.528542   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:27.529375   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:29.529701   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:27.337785   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:27.352504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:27.352578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:27.395787   70908 cri.go:89] found id: ""
	I0311 21:36:27.395809   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.395817   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:27.395823   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:27.395869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:27.441800   70908 cri.go:89] found id: ""
	I0311 21:36:27.441826   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.441834   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:27.441839   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:27.441893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:27.481761   70908 cri.go:89] found id: ""
	I0311 21:36:27.481791   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.481802   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:27.481809   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:27.481868   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:27.526981   70908 cri.go:89] found id: ""
	I0311 21:36:27.527011   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.527029   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:27.527037   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:27.527130   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:27.566569   70908 cri.go:89] found id: ""
	I0311 21:36:27.566602   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.566614   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:27.566622   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:27.566682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:27.607434   70908 cri.go:89] found id: ""
	I0311 21:36:27.607456   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.607464   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:27.607469   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:27.607529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:27.652648   70908 cri.go:89] found id: ""
	I0311 21:36:27.652674   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.652681   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:27.652686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:27.652756   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:27.691105   70908 cri.go:89] found id: ""
	I0311 21:36:27.691136   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.691148   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:27.691158   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:27.691173   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:27.706451   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:27.706477   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:27.788935   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:27.788959   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:27.788975   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:27.875721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:27.875758   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:27.927920   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:27.927951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.487728   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:30.503425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:30.503508   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:30.550846   70908 cri.go:89] found id: ""
	I0311 21:36:30.550868   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.550875   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:30.550881   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:30.550928   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:30.586886   70908 cri.go:89] found id: ""
	I0311 21:36:30.586915   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.586925   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:30.586934   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:30.586991   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:30.627849   70908 cri.go:89] found id: ""
	I0311 21:36:30.627884   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.627895   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:30.627902   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:30.627965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:30.669188   70908 cri.go:89] found id: ""
	I0311 21:36:30.669209   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.669216   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:30.669222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:30.669266   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:30.711676   70908 cri.go:89] found id: ""
	I0311 21:36:30.711697   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.711705   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:30.711710   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:30.711758   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:30.754218   70908 cri.go:89] found id: ""
	I0311 21:36:30.754240   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.754248   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:30.754253   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:30.754299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:30.791224   70908 cri.go:89] found id: ""
	I0311 21:36:30.791255   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.791263   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:30.791269   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:30.791328   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:30.831263   70908 cri.go:89] found id: ""
	I0311 21:36:30.831291   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.831301   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:30.831311   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:30.831326   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:30.876574   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:30.876600   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.928483   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:30.928509   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:30.944642   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:30.944665   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:31.026406   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:31.026428   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:31.026444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:30.648859   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.147907   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:30.952483   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.451714   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:32.028484   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:34.028948   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.611104   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:33.625644   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:33.625706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:33.664787   70908 cri.go:89] found id: ""
	I0311 21:36:33.664816   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.664825   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:33.664830   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:33.664894   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:33.704636   70908 cri.go:89] found id: ""
	I0311 21:36:33.704659   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.704666   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:33.704672   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:33.704717   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:33.744797   70908 cri.go:89] found id: ""
	I0311 21:36:33.744837   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.744848   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:33.744855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:33.744917   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:33.787435   70908 cri.go:89] found id: ""
	I0311 21:36:33.787464   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.787474   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:33.787482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:33.787541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:33.826578   70908 cri.go:89] found id: ""
	I0311 21:36:33.826606   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.826617   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:33.826624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:33.826684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:33.864854   70908 cri.go:89] found id: ""
	I0311 21:36:33.864875   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.864882   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:33.864887   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:33.864934   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:33.905366   70908 cri.go:89] found id: ""
	I0311 21:36:33.905397   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.905409   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:33.905416   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:33.905477   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:33.950196   70908 cri.go:89] found id: ""
	I0311 21:36:33.950222   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.950232   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:33.950243   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:33.950258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:34.001016   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:34.001049   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:34.059102   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:34.059131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:34.075879   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:34.075908   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:34.177114   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:34.177138   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:34.177161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:35.647611   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.147941   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:35.950147   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.449090   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:36.030072   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.527952   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:36.756459   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:36.772781   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:36.772867   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:36.820076   70908 cri.go:89] found id: ""
	I0311 21:36:36.820103   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.820111   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:36.820118   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:36.820169   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:36.859279   70908 cri.go:89] found id: ""
	I0311 21:36:36.859306   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.859317   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:36.859324   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:36.859383   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:36.899669   70908 cri.go:89] found id: ""
	I0311 21:36:36.899694   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.899705   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:36.899712   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:36.899770   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:36.938826   70908 cri.go:89] found id: ""
	I0311 21:36:36.938853   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.938864   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:36.938872   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:36.938957   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:36.976659   70908 cri.go:89] found id: ""
	I0311 21:36:36.976685   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.976693   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:36.976703   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:36.976772   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:37.015439   70908 cri.go:89] found id: ""
	I0311 21:36:37.015462   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.015469   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:37.015474   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:37.015519   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:37.057469   70908 cri.go:89] found id: ""
	I0311 21:36:37.057496   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.057507   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:37.057514   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:37.057579   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:37.106287   70908 cri.go:89] found id: ""
	I0311 21:36:37.106316   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.106325   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:37.106335   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:37.106352   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:37.122333   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:37.122367   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:37.197708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:37.197731   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:37.197742   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:37.281911   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:37.281944   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:37.335978   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:37.336011   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:39.891583   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:39.914741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:39.914823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:39.955751   70908 cri.go:89] found id: ""
	I0311 21:36:39.955773   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.955781   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:39.955786   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:39.955837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:39.997604   70908 cri.go:89] found id: ""
	I0311 21:36:39.997632   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.997642   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:39.997649   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:39.997711   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:40.039138   70908 cri.go:89] found id: ""
	I0311 21:36:40.039168   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.039178   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:40.039186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:40.039230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:40.079906   70908 cri.go:89] found id: ""
	I0311 21:36:40.079934   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.079945   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:40.079952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:40.080017   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:40.124116   70908 cri.go:89] found id: ""
	I0311 21:36:40.124141   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.124152   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:40.124159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:40.124221   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:40.165078   70908 cri.go:89] found id: ""
	I0311 21:36:40.165099   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.165108   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:40.165113   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:40.165158   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:40.203928   70908 cri.go:89] found id: ""
	I0311 21:36:40.203954   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.203962   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:40.203971   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:40.204018   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:40.244755   70908 cri.go:89] found id: ""
	I0311 21:36:40.244783   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.244793   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:40.244803   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:40.244819   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:40.302090   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:40.302125   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:40.318071   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:40.318097   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:40.405336   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:40.405363   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:40.405378   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:40.493262   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:40.493298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:40.148095   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.651483   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:40.449200   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.450259   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:40.528526   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.533619   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:45.029285   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:43.052419   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:43.068300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:43.068378   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:43.109665   70908 cri.go:89] found id: ""
	I0311 21:36:43.109701   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.109717   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:43.109725   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:43.109789   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:43.152233   70908 cri.go:89] found id: ""
	I0311 21:36:43.152253   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.152260   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:43.152265   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:43.152311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:43.194969   70908 cri.go:89] found id: ""
	I0311 21:36:43.194995   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.195002   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:43.195008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:43.195056   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:43.234555   70908 cri.go:89] found id: ""
	I0311 21:36:43.234581   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.234592   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:43.234597   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:43.234651   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:43.275188   70908 cri.go:89] found id: ""
	I0311 21:36:43.275214   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.275224   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:43.275232   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:43.275287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:43.314481   70908 cri.go:89] found id: ""
	I0311 21:36:43.314507   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.314515   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:43.314521   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:43.314580   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:43.353287   70908 cri.go:89] found id: ""
	I0311 21:36:43.353317   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.353328   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:43.353336   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:43.353395   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:43.396112   70908 cri.go:89] found id: ""
	I0311 21:36:43.396138   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.396150   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:43.396160   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:43.396175   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:43.456116   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:43.456143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:43.472992   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:43.473023   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:43.558281   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:43.558311   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:43.558327   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:43.641849   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:43.641885   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:45.147404   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.147574   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:44.954864   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.450806   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.029669   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:49.529505   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:46.187444   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:46.202848   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:46.202911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:46.244843   70908 cri.go:89] found id: ""
	I0311 21:36:46.244872   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.244880   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:46.244886   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:46.244933   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:46.297789   70908 cri.go:89] found id: ""
	I0311 21:36:46.297820   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.297831   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:46.297838   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:46.297903   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:46.353104   70908 cri.go:89] found id: ""
	I0311 21:36:46.353127   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.353134   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:46.353140   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:46.353211   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:46.426767   70908 cri.go:89] found id: ""
	I0311 21:36:46.426792   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.426799   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:46.426804   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:46.426858   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:46.469850   70908 cri.go:89] found id: ""
	I0311 21:36:46.469881   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.469891   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:46.469899   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:46.469960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:46.510692   70908 cri.go:89] found id: ""
	I0311 21:36:46.510718   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.510726   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:46.510732   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:46.510787   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:46.554445   70908 cri.go:89] found id: ""
	I0311 21:36:46.554468   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.554475   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:46.554482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:46.554527   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:46.592417   70908 cri.go:89] found id: ""
	I0311 21:36:46.592448   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.592458   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:46.592467   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:46.592480   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:46.607106   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:46.607146   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:46.691556   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:46.691575   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:46.691587   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:46.772468   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:46.772503   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:46.814478   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:46.814512   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.368451   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:49.383504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:49.383573   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:49.427392   70908 cri.go:89] found id: ""
	I0311 21:36:49.427415   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.427426   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:49.427434   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:49.427493   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:49.469022   70908 cri.go:89] found id: ""
	I0311 21:36:49.469044   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.469052   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:49.469059   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:49.469106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:49.510755   70908 cri.go:89] found id: ""
	I0311 21:36:49.510781   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.510792   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:49.510800   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:49.510886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:49.556594   70908 cri.go:89] found id: ""
	I0311 21:36:49.556631   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.556642   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:49.556649   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:49.556710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:49.597035   70908 cri.go:89] found id: ""
	I0311 21:36:49.597059   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.597067   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:49.597072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:49.597138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:49.642947   70908 cri.go:89] found id: ""
	I0311 21:36:49.642975   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.642985   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:49.642993   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:49.643051   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:49.681401   70908 cri.go:89] found id: ""
	I0311 21:36:49.681423   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.681430   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:49.681435   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:49.681478   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:49.718498   70908 cri.go:89] found id: ""
	I0311 21:36:49.718529   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.718539   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:49.718549   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:49.718563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:49.764483   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:49.764515   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.821261   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:49.821293   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:49.837110   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:49.837135   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:49.918507   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:49.918529   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:49.918541   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:49.648198   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.146837   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:49.450941   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:51.950760   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.030288   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:54.528831   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.500354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:52.516722   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:52.516811   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:52.563312   70908 cri.go:89] found id: ""
	I0311 21:36:52.563340   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.563354   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:52.563362   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:52.563421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:52.603545   70908 cri.go:89] found id: ""
	I0311 21:36:52.603572   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.603581   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:52.603588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:52.603657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:52.645624   70908 cri.go:89] found id: ""
	I0311 21:36:52.645648   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.645658   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:52.645665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:52.645722   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:52.693335   70908 cri.go:89] found id: ""
	I0311 21:36:52.693363   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.693373   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:52.693380   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:52.693437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:52.740272   70908 cri.go:89] found id: ""
	I0311 21:36:52.740310   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.740331   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:52.740341   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:52.740398   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:52.786241   70908 cri.go:89] found id: ""
	I0311 21:36:52.786276   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.786285   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:52.786291   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:52.786355   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:52.825013   70908 cri.go:89] found id: ""
	I0311 21:36:52.825042   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.825053   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:52.825061   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:52.825117   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:52.862867   70908 cri.go:89] found id: ""
	I0311 21:36:52.862892   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.862901   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:52.862908   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:52.862922   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:52.917005   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:52.917036   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:52.932086   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:52.932112   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:53.012379   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:53.012402   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:53.012413   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:53.096881   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:53.096913   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:55.640142   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:55.656664   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:55.656749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:55.697962   70908 cri.go:89] found id: ""
	I0311 21:36:55.697992   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.698000   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:55.698005   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:55.698059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:55.741888   70908 cri.go:89] found id: ""
	I0311 21:36:55.741910   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.741917   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:55.741921   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:55.741965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:55.779352   70908 cri.go:89] found id: ""
	I0311 21:36:55.779372   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.779381   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:55.779386   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:55.779430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:55.819496   70908 cri.go:89] found id: ""
	I0311 21:36:55.819530   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.819541   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:55.819549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:55.819612   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:55.859384   70908 cri.go:89] found id: ""
	I0311 21:36:55.859412   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.859419   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:55.859424   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:55.859473   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:55.899415   70908 cri.go:89] found id: ""
	I0311 21:36:55.899438   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.899445   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:55.899450   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:55.899496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:55.938595   70908 cri.go:89] found id: ""
	I0311 21:36:55.938625   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.938637   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:55.938645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:55.938710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:55.980064   70908 cri.go:89] found id: ""
	I0311 21:36:55.980089   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.980096   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:55.980103   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:55.980115   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:55.996222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:55.996297   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:36:54.147743   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.150270   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:58.648829   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:54.450767   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.949091   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:58.950443   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.529184   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:59.029323   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	W0311 21:36:56.081046   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:56.081074   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:56.081090   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:56.167748   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:56.167773   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:56.221118   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:56.221150   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:58.772403   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:58.789349   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:58.789421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:58.829945   70908 cri.go:89] found id: ""
	I0311 21:36:58.829974   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.829985   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:58.829993   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:58.830059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:58.877190   70908 cri.go:89] found id: ""
	I0311 21:36:58.877214   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.877224   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:58.877231   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:58.877295   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:58.920086   70908 cri.go:89] found id: ""
	I0311 21:36:58.920113   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.920122   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:58.920128   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:58.920189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:58.956864   70908 cri.go:89] found id: ""
	I0311 21:36:58.956890   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.956900   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:58.956907   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:58.956967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:58.999363   70908 cri.go:89] found id: ""
	I0311 21:36:58.999390   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.999400   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:58.999408   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:58.999469   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:59.041759   70908 cri.go:89] found id: ""
	I0311 21:36:59.041787   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.041797   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:59.041803   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:59.041850   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:59.084378   70908 cri.go:89] found id: ""
	I0311 21:36:59.084406   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.084417   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:59.084425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:59.084479   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:59.124105   70908 cri.go:89] found id: ""
	I0311 21:36:59.124151   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.124163   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:59.124173   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:59.124188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:59.202060   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:59.202083   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:59.202098   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:59.284025   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:59.284060   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:59.327926   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:59.327951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:59.382505   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:59.382533   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:01.147260   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.149020   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.450230   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.949834   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.529173   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.532427   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.900084   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:01.914495   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:01.914552   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:01.956887   70908 cri.go:89] found id: ""
	I0311 21:37:01.956912   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.956922   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:01.956929   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:01.956986   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:01.995358   70908 cri.go:89] found id: ""
	I0311 21:37:01.995385   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.995394   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:01.995399   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:01.995448   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:02.033949   70908 cri.go:89] found id: ""
	I0311 21:37:02.033974   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.033984   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:02.033991   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:02.034049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:02.074348   70908 cri.go:89] found id: ""
	I0311 21:37:02.074372   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.074382   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:02.074390   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:02.074449   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:02.112456   70908 cri.go:89] found id: ""
	I0311 21:37:02.112479   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.112486   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:02.112491   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:02.112554   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:02.155102   70908 cri.go:89] found id: ""
	I0311 21:37:02.155130   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.155138   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:02.155149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:02.155205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:02.191359   70908 cri.go:89] found id: ""
	I0311 21:37:02.191386   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.191393   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:02.191399   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:02.191450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:02.236178   70908 cri.go:89] found id: ""
	I0311 21:37:02.236203   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.236211   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:02.236220   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:02.236231   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:02.285794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:02.285818   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:02.342348   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:02.342387   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:02.357230   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:02.357257   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:02.431044   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:02.431064   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:02.431076   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.019473   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:05.035841   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:05.035901   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:05.082013   70908 cri.go:89] found id: ""
	I0311 21:37:05.082034   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.082041   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:05.082046   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:05.082091   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:05.126236   70908 cri.go:89] found id: ""
	I0311 21:37:05.126257   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.126265   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:05.126270   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:05.126311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:05.170573   70908 cri.go:89] found id: ""
	I0311 21:37:05.170601   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.170608   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:05.170614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:05.170658   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:05.213921   70908 cri.go:89] found id: ""
	I0311 21:37:05.213948   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.213958   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:05.213965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:05.214025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:05.261178   70908 cri.go:89] found id: ""
	I0311 21:37:05.261206   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.261213   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:05.261221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:05.261273   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:05.306007   70908 cri.go:89] found id: ""
	I0311 21:37:05.306037   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.306045   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:05.306051   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:05.306106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:05.346653   70908 cri.go:89] found id: ""
	I0311 21:37:05.346679   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.346688   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:05.346694   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:05.346752   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:05.384587   70908 cri.go:89] found id: ""
	I0311 21:37:05.384626   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.384637   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:05.384648   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:05.384664   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:05.440676   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:05.440709   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:05.456989   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:05.457018   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:05.553900   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:05.553932   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:05.553947   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.633270   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:05.633300   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:05.647077   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.146975   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:06.449502   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.450008   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:06.028642   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.529826   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.181935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:08.198179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:08.198251   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:08.236484   70908 cri.go:89] found id: ""
	I0311 21:37:08.236506   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.236516   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:08.236524   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:08.236578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:08.277701   70908 cri.go:89] found id: ""
	I0311 21:37:08.277731   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.277739   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:08.277745   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:08.277804   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:08.319559   70908 cri.go:89] found id: ""
	I0311 21:37:08.319585   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.319596   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:08.319604   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:08.319666   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:08.359752   70908 cri.go:89] found id: ""
	I0311 21:37:08.359777   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.359785   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:08.359791   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:08.359849   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:08.397432   70908 cri.go:89] found id: ""
	I0311 21:37:08.397453   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.397460   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:08.397465   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:08.397511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:08.438708   70908 cri.go:89] found id: ""
	I0311 21:37:08.438732   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.438742   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:08.438749   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:08.438807   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:08.479511   70908 cri.go:89] found id: ""
	I0311 21:37:08.479533   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.479560   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:08.479566   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:08.479620   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:08.521634   70908 cri.go:89] found id: ""
	I0311 21:37:08.521659   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.521670   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:08.521680   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:08.521693   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:08.577033   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:08.577065   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:08.592006   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:08.592030   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:08.680862   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:08.680903   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:08.680919   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:08.764991   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:08.765037   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:10.147819   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:12.648352   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:10.949371   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:12.949571   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:11.028245   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:13.028689   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:15.034232   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:11.313168   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:11.326808   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:11.326876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:11.364223   70908 cri.go:89] found id: ""
	I0311 21:37:11.364246   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.364254   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:11.364259   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:11.364311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:11.401361   70908 cri.go:89] found id: ""
	I0311 21:37:11.401391   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.401402   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:11.401409   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:11.401459   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:11.441927   70908 cri.go:89] found id: ""
	I0311 21:37:11.441950   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.441957   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:11.441962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:11.442015   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:11.480804   70908 cri.go:89] found id: ""
	I0311 21:37:11.480836   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.480847   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:11.480855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:11.480913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:11.520135   70908 cri.go:89] found id: ""
	I0311 21:37:11.520166   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.520177   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:11.520193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:11.520255   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:11.559214   70908 cri.go:89] found id: ""
	I0311 21:37:11.559244   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.559255   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:11.559263   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:11.559322   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:11.597346   70908 cri.go:89] found id: ""
	I0311 21:37:11.597374   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.597383   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:11.597391   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:11.597452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:11.646095   70908 cri.go:89] found id: ""
	I0311 21:37:11.646118   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.646127   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:11.646137   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:11.646167   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:11.691813   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:11.691844   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:11.745270   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:11.745303   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:11.761107   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:11.761131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:11.841033   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:11.841059   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:11.841074   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:14.431709   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:14.447064   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:14.447131   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:14.493094   70908 cri.go:89] found id: ""
	I0311 21:37:14.493132   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.493140   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:14.493146   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:14.493195   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:14.537391   70908 cri.go:89] found id: ""
	I0311 21:37:14.537415   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.537423   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:14.537428   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:14.537487   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:14.576284   70908 cri.go:89] found id: ""
	I0311 21:37:14.576306   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.576313   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:14.576319   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:14.576375   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:14.627057   70908 cri.go:89] found id: ""
	I0311 21:37:14.627086   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.627097   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:14.627105   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:14.627163   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:14.669204   70908 cri.go:89] found id: ""
	I0311 21:37:14.669226   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.669233   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:14.669238   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:14.669293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:14.708787   70908 cri.go:89] found id: ""
	I0311 21:37:14.708812   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.708820   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:14.708826   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:14.708892   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:14.749795   70908 cri.go:89] found id: ""
	I0311 21:37:14.749819   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.749828   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:14.749835   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:14.749893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:14.794871   70908 cri.go:89] found id: ""
	I0311 21:37:14.794900   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.794911   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:14.794922   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:14.794936   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:14.850022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:14.850050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:14.866589   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:14.866618   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:14.968887   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:14.968906   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:14.968921   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:15.047376   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:15.047404   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:14.648528   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:16.649275   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:18.649842   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:14.951387   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.451239   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.529411   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:20.030012   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.599834   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:17.613610   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:17.613665   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:17.655340   70908 cri.go:89] found id: ""
	I0311 21:37:17.655361   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.655369   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:17.655374   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:17.655416   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:17.695071   70908 cri.go:89] found id: ""
	I0311 21:37:17.695103   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.695114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:17.695121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:17.695178   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:17.731914   70908 cri.go:89] found id: ""
	I0311 21:37:17.731938   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.731946   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:17.731952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:17.732012   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:17.768198   70908 cri.go:89] found id: ""
	I0311 21:37:17.768224   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.768236   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:17.768242   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:17.768301   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:17.802881   70908 cri.go:89] found id: ""
	I0311 21:37:17.802909   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.802920   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:17.802928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:17.802983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:17.841660   70908 cri.go:89] found id: ""
	I0311 21:37:17.841684   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.841692   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:17.841698   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:17.841749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:17.880154   70908 cri.go:89] found id: ""
	I0311 21:37:17.880183   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.880196   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:17.880205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:17.880260   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:17.919797   70908 cri.go:89] found id: ""
	I0311 21:37:17.919822   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.919829   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:17.919837   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:17.919847   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:17.976607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:17.976636   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:17.993313   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:17.993339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:18.069928   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:18.069956   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:18.069973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:18.152257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:18.152285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:20.706553   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:20.721148   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:20.721214   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:20.762913   70908 cri.go:89] found id: ""
	I0311 21:37:20.762935   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.762943   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:20.762952   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:20.762997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:20.811120   70908 cri.go:89] found id: ""
	I0311 21:37:20.811147   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.811158   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:20.811165   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:20.811225   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:20.848987   70908 cri.go:89] found id: ""
	I0311 21:37:20.849015   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.849026   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:20.849033   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:20.849098   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:20.896201   70908 cri.go:89] found id: ""
	I0311 21:37:20.896226   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.896233   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:20.896240   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:20.896299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:20.936570   70908 cri.go:89] found id: ""
	I0311 21:37:20.936595   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.936603   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:20.936608   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:20.936657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:20.977535   70908 cri.go:89] found id: ""
	I0311 21:37:20.977565   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.977576   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:20.977584   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:20.977647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:21.015370   70908 cri.go:89] found id: ""
	I0311 21:37:21.015395   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.015405   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:21.015413   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:21.015472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:21.146868   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:23.147272   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:19.950972   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:22.450298   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:22.528109   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:24.530216   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:21.056190   70908 cri.go:89] found id: ""
	I0311 21:37:21.056214   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.056225   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:21.056235   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:21.056255   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:21.112022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:21.112051   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:21.128841   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:21.128872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:21.209690   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:21.209716   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:21.209732   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:21.291064   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:21.291099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:23.844334   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:23.860000   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:23.860061   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:23.899777   70908 cri.go:89] found id: ""
	I0311 21:37:23.899805   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.899814   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:23.899820   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:23.899879   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:23.941510   70908 cri.go:89] found id: ""
	I0311 21:37:23.941537   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.941547   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:23.941555   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:23.941627   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:23.980564   70908 cri.go:89] found id: ""
	I0311 21:37:23.980592   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.980602   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:23.980614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:23.980676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:24.020310   70908 cri.go:89] found id: ""
	I0311 21:37:24.020337   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.020348   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:24.020354   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:24.020410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:24.059320   70908 cri.go:89] found id: ""
	I0311 21:37:24.059349   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.059359   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:24.059367   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:24.059424   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:24.096625   70908 cri.go:89] found id: ""
	I0311 21:37:24.096652   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.096660   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:24.096666   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:24.096723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:24.137068   70908 cri.go:89] found id: ""
	I0311 21:37:24.137100   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.137112   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:24.137121   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:24.137182   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:24.181298   70908 cri.go:89] found id: ""
	I0311 21:37:24.181325   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.181336   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:24.181348   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:24.181364   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:24.265423   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:24.265454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:24.318088   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:24.318113   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:24.374402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:24.374430   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:24.388934   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:24.388962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:24.475842   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:25.647164   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:27.650157   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:24.948984   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:26.949444   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:28.950697   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:27.030240   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:29.030848   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:26.976017   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:26.991533   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:26.991602   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:27.034750   70908 cri.go:89] found id: ""
	I0311 21:37:27.034769   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.034776   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:27.034781   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:27.034837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:27.073275   70908 cri.go:89] found id: ""
	I0311 21:37:27.073301   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.073309   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:27.073317   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:27.073363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:27.113396   70908 cri.go:89] found id: ""
	I0311 21:37:27.113418   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.113425   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:27.113431   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:27.113482   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:27.157442   70908 cri.go:89] found id: ""
	I0311 21:37:27.157465   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.157475   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:27.157482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:27.157534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:27.197277   70908 cri.go:89] found id: ""
	I0311 21:37:27.197302   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.197309   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:27.197315   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:27.197363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:27.237967   70908 cri.go:89] found id: ""
	I0311 21:37:27.237991   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.237999   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:27.238005   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:27.238077   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:27.280434   70908 cri.go:89] found id: ""
	I0311 21:37:27.280459   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.280467   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:27.280472   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:27.280535   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:27.334940   70908 cri.go:89] found id: ""
	I0311 21:37:27.334970   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.334982   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:27.334992   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:27.335010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:27.402535   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:27.402570   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:27.416758   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:27.416787   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:27.492762   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:27.492786   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:27.492803   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:27.576989   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:27.577032   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:30.124039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:30.138419   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:30.138483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:30.180900   70908 cri.go:89] found id: ""
	I0311 21:37:30.180926   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.180936   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:30.180944   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:30.180998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:30.222886   70908 cri.go:89] found id: ""
	I0311 21:37:30.222913   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.222921   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:30.222926   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:30.222976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:30.264332   70908 cri.go:89] found id: ""
	I0311 21:37:30.264357   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.264367   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:30.264376   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:30.264436   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:30.307084   70908 cri.go:89] found id: ""
	I0311 21:37:30.307112   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.307123   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:30.307130   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:30.307188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:30.345954   70908 cri.go:89] found id: ""
	I0311 21:37:30.345979   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.345990   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:30.345997   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:30.346057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:30.389408   70908 cri.go:89] found id: ""
	I0311 21:37:30.389439   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.389450   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:30.389457   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:30.389517   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:30.438380   70908 cri.go:89] found id: ""
	I0311 21:37:30.438410   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.438420   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:30.438427   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:30.438489   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:30.479860   70908 cri.go:89] found id: ""
	I0311 21:37:30.479884   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.479895   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:30.479906   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:30.479920   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:30.535831   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:30.535857   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:30.552702   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:30.552725   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:30.633417   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:30.633439   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:30.633454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:30.723106   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:30.723143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
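The cycle above repeats for the rest of this test: the bootstrapper polls for a running kube-apiserver with pgrep, asks CRI-O via crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A rough manual equivalent of the same checks, run on the node over minikube ssh (the profile name and SSH access are assumptions; the individual commands are the ones shown in the log), is:

    # any control-plane containers known to CRI-O? (the log above shows none were found)
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    # recent kubelet and CRI-O activity, as gathered above
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400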
	I0311 21:37:30.147993   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:32.152839   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:31.450942   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.949947   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:31.528469   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.529721   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.270654   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:33.296640   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:33.296710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:33.366053   70908 cri.go:89] found id: ""
	I0311 21:37:33.366082   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.366093   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:33.366101   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:33.366161   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:33.421455   70908 cri.go:89] found id: ""
	I0311 21:37:33.421488   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.421501   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:33.421509   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:33.421583   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:33.464555   70908 cri.go:89] found id: ""
	I0311 21:37:33.464579   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.464586   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:33.464592   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:33.464647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:33.507044   70908 cri.go:89] found id: ""
	I0311 21:37:33.507086   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.507100   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:33.507110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:33.507175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:33.561446   70908 cri.go:89] found id: ""
	I0311 21:37:33.561518   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.561532   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:33.561540   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:33.561601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:33.604496   70908 cri.go:89] found id: ""
	I0311 21:37:33.604519   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.604528   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:33.604534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:33.604591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:33.645754   70908 cri.go:89] found id: ""
	I0311 21:37:33.645781   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.645791   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:33.645797   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:33.645869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:33.690041   70908 cri.go:89] found id: ""
	I0311 21:37:33.690071   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.690082   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:33.690092   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:33.690108   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:33.765708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:33.765737   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:33.765752   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:33.848869   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:33.848906   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:33.900191   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:33.900223   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:33.957101   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:33.957138   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:34.646831   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.647640   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.449429   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:38.948831   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.028141   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:38.028588   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:40.028676   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.474442   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:36.490159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:36.490231   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:36.537784   70908 cri.go:89] found id: ""
	I0311 21:37:36.537812   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.537822   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:36.537829   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:36.537885   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:36.581192   70908 cri.go:89] found id: ""
	I0311 21:37:36.581219   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.581230   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:36.581237   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:36.581297   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:36.620448   70908 cri.go:89] found id: ""
	I0311 21:37:36.620480   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.620492   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:36.620501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:36.620566   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:36.662135   70908 cri.go:89] found id: ""
	I0311 21:37:36.662182   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.662193   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:36.662203   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:36.662268   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:36.708138   70908 cri.go:89] found id: ""
	I0311 21:37:36.708178   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.708188   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:36.708198   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:36.708267   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:36.749668   70908 cri.go:89] found id: ""
	I0311 21:37:36.749697   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.749708   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:36.749717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:36.749783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:36.788455   70908 cri.go:89] found id: ""
	I0311 21:37:36.788476   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.788483   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:36.788488   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:36.788534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:36.830216   70908 cri.go:89] found id: ""
	I0311 21:37:36.830244   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.830257   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:36.830267   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:36.830285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:36.915306   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:36.915336   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:36.958861   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:36.958892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:37.014463   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:37.014489   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:37.029979   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:37.030010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:37.106840   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
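Every "describe nodes" attempt in this run fails the same way: kubectl cannot reach the API server on localhost:8443, which matches crictl reporting no kube-apiserver container at all. A minimal sketch for confirming from the node that nothing is serving that port (8443 is taken from the error text above; ss and curl being available in the guest image is an assumption):

    # is anything listening on the apiserver port?
    sudo ss -ltn | grep 8443 || echo "nothing listening on 8443"
    # would a health probe get through? (-k because the cluster CA is not trusted here)
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"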
	I0311 21:37:39.607929   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:39.626247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:39.626307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:39.667409   70908 cri.go:89] found id: ""
	I0311 21:37:39.667436   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.667446   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:39.667454   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:39.667509   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:39.714167   70908 cri.go:89] found id: ""
	I0311 21:37:39.714198   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.714210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:39.714217   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:39.714275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:39.754759   70908 cri.go:89] found id: ""
	I0311 21:37:39.754787   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.754798   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:39.754805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:39.754865   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:39.794999   70908 cri.go:89] found id: ""
	I0311 21:37:39.795028   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.795038   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:39.795045   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:39.795108   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:39.836284   70908 cri.go:89] found id: ""
	I0311 21:37:39.836310   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.836321   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:39.836328   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:39.836386   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:39.876487   70908 cri.go:89] found id: ""
	I0311 21:37:39.876518   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.876530   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:39.876539   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:39.876601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:39.918750   70908 cri.go:89] found id: ""
	I0311 21:37:39.918785   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.918796   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:39.918813   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:39.918871   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:39.958486   70908 cri.go:89] found id: ""
	I0311 21:37:39.958517   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.958529   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:39.958537   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:39.958550   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:39.973899   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:39.973925   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:40.055954   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:40.055980   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:40.055995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:40.144801   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:40.144826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:40.189692   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:40.189722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:39.148581   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:41.647869   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:43.648550   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:40.949502   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.951277   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.528844   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:44.529317   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.748909   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:42.763794   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:42.763877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:42.801470   70908 cri.go:89] found id: ""
	I0311 21:37:42.801493   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.801500   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:42.801506   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:42.801561   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:42.846267   70908 cri.go:89] found id: ""
	I0311 21:37:42.846294   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.846301   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:42.846307   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:42.846357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:42.890257   70908 cri.go:89] found id: ""
	I0311 21:37:42.890283   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.890294   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:42.890301   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:42.890357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:42.933605   70908 cri.go:89] found id: ""
	I0311 21:37:42.933628   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.933636   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:42.933643   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:42.933699   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:42.979020   70908 cri.go:89] found id: ""
	I0311 21:37:42.979043   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.979052   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:42.979059   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:42.979122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:43.021695   70908 cri.go:89] found id: ""
	I0311 21:37:43.021724   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.021734   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:43.021741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:43.021801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:43.064356   70908 cri.go:89] found id: ""
	I0311 21:37:43.064398   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.064406   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:43.064412   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:43.064457   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:43.101878   70908 cri.go:89] found id: ""
	I0311 21:37:43.101901   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.101909   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:43.101917   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:43.101930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:43.185836   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:43.185861   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:43.185874   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:43.268879   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:43.268912   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:43.319582   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:43.319614   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:43.374996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:43.375022   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:45.890408   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:45.905973   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:45.906041   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:45.951994   70908 cri.go:89] found id: ""
	I0311 21:37:45.952025   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.952040   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:45.952049   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:45.952112   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:45.992913   70908 cri.go:89] found id: ""
	I0311 21:37:45.992953   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.992964   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:45.992971   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:45.993034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:46.036306   70908 cri.go:89] found id: ""
	I0311 21:37:46.036334   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.036345   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:46.036353   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:46.036410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:46.147754   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:48.647534   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:45.450180   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:47.949568   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:46.532244   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:49.028905   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:46.077532   70908 cri.go:89] found id: ""
	I0311 21:37:46.077564   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.077576   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:46.077583   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:46.077633   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:46.115953   70908 cri.go:89] found id: ""
	I0311 21:37:46.115976   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.115983   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:46.115990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:46.116072   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:46.155665   70908 cri.go:89] found id: ""
	I0311 21:37:46.155699   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.155709   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:46.155717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:46.155775   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:46.197650   70908 cri.go:89] found id: ""
	I0311 21:37:46.197677   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.197696   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:46.197705   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:46.197766   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:46.243006   70908 cri.go:89] found id: ""
	I0311 21:37:46.243030   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.243037   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:46.243045   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:46.243058   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:46.294668   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:46.294696   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:46.308700   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:46.308721   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:46.387188   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:46.387207   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:46.387219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:46.480390   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:46.480423   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.027202   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:49.042292   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:49.042361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:49.081547   70908 cri.go:89] found id: ""
	I0311 21:37:49.081568   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.081579   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:49.081585   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:49.081632   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:49.127438   70908 cri.go:89] found id: ""
	I0311 21:37:49.127467   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.127477   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:49.127485   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:49.127545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:49.173992   70908 cri.go:89] found id: ""
	I0311 21:37:49.174024   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.174033   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:49.174042   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:49.174114   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:49.217087   70908 cri.go:89] found id: ""
	I0311 21:37:49.217120   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.217130   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:49.217138   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:49.217198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:49.255929   70908 cri.go:89] found id: ""
	I0311 21:37:49.255955   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.255970   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:49.255978   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:49.256037   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:49.296373   70908 cri.go:89] found id: ""
	I0311 21:37:49.296399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.296409   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:49.296417   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:49.296474   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:49.335063   70908 cri.go:89] found id: ""
	I0311 21:37:49.335092   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.335103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:49.335110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:49.335176   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:49.378374   70908 cri.go:89] found id: ""
	I0311 21:37:49.378399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.378406   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:49.378414   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:49.378427   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.422193   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:49.422220   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:49.474861   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:49.474893   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:49.490193   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:49.490219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:49.571857   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:49.571880   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:49.571895   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:51.149814   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:53.648033   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:49.949603   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:51.949943   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:53.951963   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:51.531753   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:54.028723   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
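The interleaved pod_ready lines come from three other test processes (PIDs 70604, 70417 and 70458) running in parallel, each polling a metrics-server pod in kube-system that never reports Ready. A sketch of reading the same Ready condition directly (the pod name is copied from the log; which kubeconfig/context to use is an assumption):

    # prints "True" once the pod the poller is waiting on becomes Ready
    kubectl -n kube-system get pod metrics-server-57f55c9bc5-7qw98 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'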
	I0311 21:37:52.168934   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:52.183086   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:52.183154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:52.221632   70908 cri.go:89] found id: ""
	I0311 21:37:52.221664   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.221675   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:52.221682   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:52.221743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:52.261550   70908 cri.go:89] found id: ""
	I0311 21:37:52.261575   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.261582   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:52.261588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:52.261638   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:52.302879   70908 cri.go:89] found id: ""
	I0311 21:37:52.302910   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.302920   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:52.302927   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:52.302987   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:52.346462   70908 cri.go:89] found id: ""
	I0311 21:37:52.346485   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.346494   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:52.346499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:52.346551   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:52.387949   70908 cri.go:89] found id: ""
	I0311 21:37:52.387977   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.387988   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:52.387995   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:52.388052   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:52.428527   70908 cri.go:89] found id: ""
	I0311 21:37:52.428564   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.428574   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:52.428582   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:52.428649   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:52.469516   70908 cri.go:89] found id: ""
	I0311 21:37:52.469548   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.469558   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:52.469565   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:52.469616   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:52.508371   70908 cri.go:89] found id: ""
	I0311 21:37:52.508407   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.508417   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:52.508429   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:52.508444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:52.587309   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:52.587346   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:52.587361   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:52.666419   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:52.666449   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:52.713150   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:52.713184   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:52.768011   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:52.768041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.284835   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:55.298742   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:55.298799   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:55.340215   70908 cri.go:89] found id: ""
	I0311 21:37:55.340240   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.340251   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:55.340257   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:55.340321   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:55.377930   70908 cri.go:89] found id: ""
	I0311 21:37:55.377956   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.377967   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:55.377974   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:55.378039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:55.418786   70908 cri.go:89] found id: ""
	I0311 21:37:55.418814   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.418822   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:55.418827   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:55.418883   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:55.461566   70908 cri.go:89] found id: ""
	I0311 21:37:55.461586   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.461593   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:55.461601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:55.461655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:55.502917   70908 cri.go:89] found id: ""
	I0311 21:37:55.502945   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.502955   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:55.502962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:55.503022   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:55.551417   70908 cri.go:89] found id: ""
	I0311 21:37:55.551441   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.551454   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:55.551462   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:55.551514   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:55.596060   70908 cri.go:89] found id: ""
	I0311 21:37:55.596092   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.596103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:55.596111   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:55.596172   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:55.635495   70908 cri.go:89] found id: ""
	I0311 21:37:55.635523   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.635535   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:55.635547   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:55.635564   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:55.691705   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:55.691735   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.707696   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:55.707718   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:55.780432   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:55.780452   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:55.780465   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:55.866033   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:55.866067   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:55.648873   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.147404   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:56.452135   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.951150   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:56.528533   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.529769   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.437299   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:58.453058   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:58.453125   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:58.493317   70908 cri.go:89] found id: ""
	I0311 21:37:58.493339   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.493347   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:58.493353   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:58.493408   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:58.543533   70908 cri.go:89] found id: ""
	I0311 21:37:58.543556   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.543567   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:58.543578   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:58.543634   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:58.585255   70908 cri.go:89] found id: ""
	I0311 21:37:58.585282   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.585292   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:58.585300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:58.585359   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:58.622393   70908 cri.go:89] found id: ""
	I0311 21:37:58.622421   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.622428   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:58.622434   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:58.622501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:58.661939   70908 cri.go:89] found id: ""
	I0311 21:37:58.661963   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.661971   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:58.661977   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:58.662034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:58.703628   70908 cri.go:89] found id: ""
	I0311 21:37:58.703663   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.703674   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:58.703682   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:58.703743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:58.742553   70908 cri.go:89] found id: ""
	I0311 21:37:58.742583   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.742594   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:58.742601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:58.742662   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:58.785016   70908 cri.go:89] found id: ""
	I0311 21:37:58.785040   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.785047   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:58.785055   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:58.785071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:58.857757   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:58.857773   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:58.857786   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:58.946120   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:58.946148   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:58.996288   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:58.996328   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:59.055371   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:59.055407   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:00.651621   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.149663   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:00.951776   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.451012   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:01.028303   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.028600   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:05.032276   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:01.571092   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:01.591149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:01.591238   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:01.629156   70908 cri.go:89] found id: ""
	I0311 21:38:01.629184   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.629196   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:01.629203   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:01.629261   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:01.673656   70908 cri.go:89] found id: ""
	I0311 21:38:01.673680   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.673687   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:01.673692   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:01.673739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:01.713361   70908 cri.go:89] found id: ""
	I0311 21:38:01.713389   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.713397   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:01.713403   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:01.713450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:01.757256   70908 cri.go:89] found id: ""
	I0311 21:38:01.757286   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.757298   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:01.757305   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:01.757362   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:01.797538   70908 cri.go:89] found id: ""
	I0311 21:38:01.797565   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.797573   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:01.797580   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:01.797635   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:01.838664   70908 cri.go:89] found id: ""
	I0311 21:38:01.838692   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.838701   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:01.838707   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:01.838754   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:01.893638   70908 cri.go:89] found id: ""
	I0311 21:38:01.893668   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.893679   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:01.893686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:01.893747   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:01.935547   70908 cri.go:89] found id: ""
	I0311 21:38:01.935569   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.935577   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:01.935585   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:01.935596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:01.989964   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:01.989988   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:02.004949   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:02.004973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:02.082006   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:02.082024   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:02.082041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:02.171040   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:02.171072   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:04.724699   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:04.741445   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:04.741512   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:04.783924   70908 cri.go:89] found id: ""
	I0311 21:38:04.783951   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.783962   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:04.783969   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:04.784028   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:04.825806   70908 cri.go:89] found id: ""
	I0311 21:38:04.825835   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.825845   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:04.825852   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:04.825913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:04.864070   70908 cri.go:89] found id: ""
	I0311 21:38:04.864106   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.864118   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:04.864126   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:04.864181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:04.901735   70908 cri.go:89] found id: ""
	I0311 21:38:04.901759   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.901769   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:04.901777   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:04.901832   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:04.941473   70908 cri.go:89] found id: ""
	I0311 21:38:04.941496   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.941505   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:04.941513   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:04.941569   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:04.993132   70908 cri.go:89] found id: ""
	I0311 21:38:04.993162   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.993170   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:04.993178   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:04.993237   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:05.037925   70908 cri.go:89] found id: ""
	I0311 21:38:05.037950   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.037960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:05.037967   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:05.038026   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:05.080726   70908 cri.go:89] found id: ""
	I0311 21:38:05.080773   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.080784   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:05.080794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:05.080806   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:05.138205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:05.138233   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:05.155048   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:05.155071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:05.233067   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:05.233086   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:05.233099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:05.317897   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:05.317928   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:05.646661   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.647686   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:05.949900   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.950261   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.528049   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:09.530724   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.863484   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:07.877342   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:07.877411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:07.916352   70908 cri.go:89] found id: ""
	I0311 21:38:07.916374   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.916383   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:07.916391   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:07.916454   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:07.954833   70908 cri.go:89] found id: ""
	I0311 21:38:07.954854   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.954863   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:07.954870   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:07.954926   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:07.993124   70908 cri.go:89] found id: ""
	I0311 21:38:07.993152   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.993161   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:07.993168   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:07.993232   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:08.039081   70908 cri.go:89] found id: ""
	I0311 21:38:08.039108   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.039118   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:08.039125   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:08.039191   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:08.084627   70908 cri.go:89] found id: ""
	I0311 21:38:08.084650   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.084658   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:08.084665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:08.084712   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:08.125986   70908 cri.go:89] found id: ""
	I0311 21:38:08.126015   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.126026   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:08.126034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:08.126080   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:08.167149   70908 cri.go:89] found id: ""
	I0311 21:38:08.167176   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.167188   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:08.167193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:08.167252   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:08.204988   70908 cri.go:89] found id: ""
	I0311 21:38:08.205012   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.205020   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:08.205028   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:08.205043   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:08.295226   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:08.295268   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:08.357789   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:08.357820   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:08.434091   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:08.434132   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:08.455208   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:08.455240   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:08.529620   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:11.030060   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:09.648047   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.649628   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:13.652370   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:10.450139   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:12.949551   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.531354   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:14.029703   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.044303   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:11.046353   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:11.088067   70908 cri.go:89] found id: ""
	I0311 21:38:11.088099   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.088110   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:11.088117   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:11.088177   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:11.131077   70908 cri.go:89] found id: ""
	I0311 21:38:11.131104   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.131114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:11.131121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:11.131181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:11.172409   70908 cri.go:89] found id: ""
	I0311 21:38:11.172431   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.172439   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:11.172444   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:11.172496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:11.216775   70908 cri.go:89] found id: ""
	I0311 21:38:11.216817   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.216825   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:11.216830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:11.216886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:11.255105   70908 cri.go:89] found id: ""
	I0311 21:38:11.255129   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.255137   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:11.255142   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:11.255205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:11.292397   70908 cri.go:89] found id: ""
	I0311 21:38:11.292429   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.292440   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:11.292448   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:11.292518   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:11.330376   70908 cri.go:89] found id: ""
	I0311 21:38:11.330397   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.330408   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:11.330415   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:11.330476   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:11.367699   70908 cri.go:89] found id: ""
	I0311 21:38:11.367727   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.367737   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:11.367748   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:11.367763   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:11.421847   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:11.421876   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:11.437570   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:11.437593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:11.522084   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:11.522108   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:11.522123   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:11.606181   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:11.606228   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.153952   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:14.175726   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:14.175798   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:14.221752   70908 cri.go:89] found id: ""
	I0311 21:38:14.221784   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.221798   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:14.221807   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:14.221895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:14.286690   70908 cri.go:89] found id: ""
	I0311 21:38:14.286720   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.286740   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:14.286757   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:14.286824   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:14.343764   70908 cri.go:89] found id: ""
	I0311 21:38:14.343790   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.343799   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:14.343806   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:14.343876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:14.381198   70908 cri.go:89] found id: ""
	I0311 21:38:14.381220   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.381230   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:14.381237   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:14.381307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:14.421578   70908 cri.go:89] found id: ""
	I0311 21:38:14.421603   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.421613   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:14.421620   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:14.421678   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:14.462945   70908 cri.go:89] found id: ""
	I0311 21:38:14.462972   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.462982   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:14.462990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:14.463049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:14.503503   70908 cri.go:89] found id: ""
	I0311 21:38:14.503532   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.503543   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:14.503550   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:14.503610   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:14.543987   70908 cri.go:89] found id: ""
	I0311 21:38:14.544021   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.544034   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:14.544045   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:14.544062   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:14.624781   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:14.624804   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:14.624821   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:14.707130   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:14.707161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.750815   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:14.750848   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:14.806855   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:14.806882   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:16.149516   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:18.646716   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:14.949827   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:16.953660   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:16.031935   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:18.529085   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:17.325267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:17.340421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:17.340483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:17.382808   70908 cri.go:89] found id: ""
	I0311 21:38:17.382831   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.382841   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:17.382849   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:17.382906   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:17.424838   70908 cri.go:89] found id: ""
	I0311 21:38:17.424865   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.424875   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:17.424883   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:17.424940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:17.466298   70908 cri.go:89] found id: ""
	I0311 21:38:17.466320   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.466327   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:17.466333   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:17.466397   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:17.506648   70908 cri.go:89] found id: ""
	I0311 21:38:17.506678   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.506685   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:17.506691   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:17.506739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:17.544019   70908 cri.go:89] found id: ""
	I0311 21:38:17.544048   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.544057   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:17.544067   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:17.544154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:17.583691   70908 cri.go:89] found id: ""
	I0311 21:38:17.583710   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.583717   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:17.583723   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:17.583768   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:17.624432   70908 cri.go:89] found id: ""
	I0311 21:38:17.624453   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.624460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:17.624466   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:17.624516   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:17.663253   70908 cri.go:89] found id: ""
	I0311 21:38:17.663294   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.663312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:17.663322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:17.663339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:17.749928   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:17.749962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:17.792817   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:17.792853   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:17.847391   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:17.847419   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:17.862813   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:17.862835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:17.935307   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:20.435995   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:20.452441   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:20.452510   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:20.491960   70908 cri.go:89] found id: ""
	I0311 21:38:20.491985   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.491992   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:20.491998   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:20.492045   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:20.531679   70908 cri.go:89] found id: ""
	I0311 21:38:20.531700   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.531707   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:20.531712   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:20.531764   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:20.571666   70908 cri.go:89] found id: ""
	I0311 21:38:20.571687   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.571694   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:20.571699   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:20.571762   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:20.611165   70908 cri.go:89] found id: ""
	I0311 21:38:20.611187   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.611194   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:20.611199   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:20.611248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:20.648680   70908 cri.go:89] found id: ""
	I0311 21:38:20.648709   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.648720   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:20.648728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:20.648801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:20.690177   70908 cri.go:89] found id: ""
	I0311 21:38:20.690204   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.690215   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:20.690222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:20.690298   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:20.728918   70908 cri.go:89] found id: ""
	I0311 21:38:20.728949   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.728960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:20.728968   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:20.729039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:20.773559   70908 cri.go:89] found id: ""
	I0311 21:38:20.773586   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.773596   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:20.773607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:20.773623   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:20.788709   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:20.788750   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:20.869832   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:20.869856   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:20.869868   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:20.963515   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:20.963544   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:21.007029   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:21.007055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:21.147703   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.660410   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:19.449416   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:21.451194   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.950401   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:20.529497   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:22.529947   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:25.030431   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.566134   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:23.583855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:23.583911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:23.623605   70908 cri.go:89] found id: ""
	I0311 21:38:23.623633   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.623656   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:23.623664   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:23.623719   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:23.663058   70908 cri.go:89] found id: ""
	I0311 21:38:23.663081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.663091   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:23.663098   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:23.663157   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:23.701930   70908 cri.go:89] found id: ""
	I0311 21:38:23.701963   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.701975   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:23.701985   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:23.702049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:23.743925   70908 cri.go:89] found id: ""
	I0311 21:38:23.743955   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.743964   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:23.743970   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:23.744046   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:23.784030   70908 cri.go:89] found id: ""
	I0311 21:38:23.784055   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.784066   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:23.784073   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:23.784132   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:23.823054   70908 cri.go:89] found id: ""
	I0311 21:38:23.823081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.823089   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:23.823097   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:23.823156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:23.863629   70908 cri.go:89] found id: ""
	I0311 21:38:23.863654   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.863662   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:23.863668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:23.863724   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:23.904429   70908 cri.go:89] found id: ""
	I0311 21:38:23.904454   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.904462   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:23.904470   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:23.904481   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:23.962356   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:23.962393   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:23.977667   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:23.977689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:24.068791   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:24.068820   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:24.068835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:24.157857   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:24.157892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:26.147447   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:28.148069   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:26.450243   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:28.950495   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:27.530194   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:30.029286   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:26.705872   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:26.720840   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:26.720936   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:26.766449   70908 cri.go:89] found id: ""
	I0311 21:38:26.766480   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.766490   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:26.766496   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:26.766557   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:26.806179   70908 cri.go:89] found id: ""
	I0311 21:38:26.806203   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.806210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:26.806216   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:26.806275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:26.850737   70908 cri.go:89] found id: ""
	I0311 21:38:26.850765   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.850775   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:26.850785   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:26.850845   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:26.897694   70908 cri.go:89] found id: ""
	I0311 21:38:26.897722   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.897733   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:26.897744   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:26.897802   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:26.940940   70908 cri.go:89] found id: ""
	I0311 21:38:26.940962   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.940969   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:26.940975   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:26.941021   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:26.978576   70908 cri.go:89] found id: ""
	I0311 21:38:26.978604   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.978614   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:26.978625   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:26.978682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:27.016331   70908 cri.go:89] found id: ""
	I0311 21:38:27.016363   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.016374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:27.016381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:27.016439   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:27.061541   70908 cri.go:89] found id: ""
	I0311 21:38:27.061569   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.061580   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:27.061590   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:27.061609   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:27.154977   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:27.155017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:27.204458   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:27.204488   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:27.259960   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:27.259997   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:27.277806   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:27.277832   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:27.356111   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:29.856828   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:29.871331   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:29.871413   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:29.912867   70908 cri.go:89] found id: ""
	I0311 21:38:29.912895   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.912904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:29.912910   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:29.912973   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:29.953458   70908 cri.go:89] found id: ""
	I0311 21:38:29.953483   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.953491   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:29.953497   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:29.953553   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:29.997873   70908 cri.go:89] found id: ""
	I0311 21:38:29.997904   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.997912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:29.997921   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:29.997983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:30.038831   70908 cri.go:89] found id: ""
	I0311 21:38:30.038861   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.038872   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:30.038880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:30.038940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:30.082089   70908 cri.go:89] found id: ""
	I0311 21:38:30.082117   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.082127   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:30.082135   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:30.082213   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:30.121167   70908 cri.go:89] found id: ""
	I0311 21:38:30.121198   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.121209   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:30.121216   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:30.121274   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:30.162342   70908 cri.go:89] found id: ""
	I0311 21:38:30.162371   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.162380   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:30.162393   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:30.162452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:30.201727   70908 cri.go:89] found id: ""
	I0311 21:38:30.201753   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.201761   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:30.201769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:30.201780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:30.283314   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:30.283346   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:30.333900   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:30.333930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:30.391761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:30.391798   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:30.407907   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:30.407930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:30.489560   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:30.646773   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.649048   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:31.456251   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:33.951315   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.529160   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:34.530183   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.989976   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:33.004724   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:33.004814   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:33.049701   70908 cri.go:89] found id: ""
	I0311 21:38:33.049733   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.049743   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:33.049753   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:33.049823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:33.097759   70908 cri.go:89] found id: ""
	I0311 21:38:33.097792   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.097804   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:33.097811   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:33.097875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:33.143257   70908 cri.go:89] found id: ""
	I0311 21:38:33.143291   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.143300   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:33.143308   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:33.143376   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:33.187434   70908 cri.go:89] found id: ""
	I0311 21:38:33.187464   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.187477   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:33.187483   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:33.187558   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:33.236201   70908 cri.go:89] found id: ""
	I0311 21:38:33.236230   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.236239   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:33.236245   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:33.236312   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:33.279710   70908 cri.go:89] found id: ""
	I0311 21:38:33.279783   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.279816   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:33.279830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:33.279898   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:33.325022   70908 cri.go:89] found id: ""
	I0311 21:38:33.325053   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.325064   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:33.325072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:33.325138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:33.368588   70908 cri.go:89] found id: ""
	I0311 21:38:33.368614   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.368622   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:33.368629   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:33.368640   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:33.427761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:33.427801   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:33.444440   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:33.444472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:33.527745   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:33.527764   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:33.527775   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:33.608215   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:33.608248   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:35.146541   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:37.146917   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.450175   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:38.949371   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.531125   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:39.028780   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.158253   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:36.172370   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:36.172438   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:36.216905   70908 cri.go:89] found id: ""
	I0311 21:38:36.216935   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.216945   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:36.216951   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:36.216996   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:36.260844   70908 cri.go:89] found id: ""
	I0311 21:38:36.260875   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.260885   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:36.260890   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:36.260941   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:36.306730   70908 cri.go:89] found id: ""
	I0311 21:38:36.306755   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.306767   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:36.306772   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:36.306820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:36.346957   70908 cri.go:89] found id: ""
	I0311 21:38:36.346993   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.347004   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:36.347012   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:36.347082   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:36.392265   70908 cri.go:89] found id: ""
	I0311 21:38:36.392295   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.392306   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:36.392313   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:36.392379   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:36.433383   70908 cri.go:89] found id: ""
	I0311 21:38:36.433407   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.433414   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:36.433421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:36.433467   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:36.471291   70908 cri.go:89] found id: ""
	I0311 21:38:36.471325   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.471336   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:36.471344   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:36.471411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:36.514662   70908 cri.go:89] found id: ""
	I0311 21:38:36.514688   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.514698   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:36.514708   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:36.514722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:36.533222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:36.533251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:36.616359   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:36.616384   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:36.616400   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:36.719105   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:36.719137   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:36.771125   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:36.771156   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.324847   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:39.341149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:39.341218   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:39.380284   70908 cri.go:89] found id: ""
	I0311 21:38:39.380324   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.380335   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:39.380343   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:39.380407   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:39.429860   70908 cri.go:89] found id: ""
	I0311 21:38:39.429886   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.429894   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:39.429899   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:39.429960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:39.468089   70908 cri.go:89] found id: ""
	I0311 21:38:39.468113   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.468121   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:39.468127   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:39.468188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:39.508589   70908 cri.go:89] found id: ""
	I0311 21:38:39.508617   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.508628   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:39.508636   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:39.508695   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:39.552427   70908 cri.go:89] found id: ""
	I0311 21:38:39.552451   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.552459   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:39.552464   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:39.552511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:39.592586   70908 cri.go:89] found id: ""
	I0311 21:38:39.592607   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.592615   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:39.592621   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:39.592670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:39.637138   70908 cri.go:89] found id: ""
	I0311 21:38:39.637167   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.637178   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:39.637186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:39.637248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:39.679422   70908 cri.go:89] found id: ""
	I0311 21:38:39.679457   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.679470   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:39.679482   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:39.679499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.734815   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:39.734850   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:39.750448   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:39.750472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:39.832912   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:39.832936   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:39.832951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:39.924020   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:39.924061   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:39.648759   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:42.146226   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:40.950021   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:42.951344   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:41.528407   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:43.529130   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:43.529166   70458 pod_ready.go:81] duration metric: took 4m0.007627735s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	E0311 21:38:43.529179   70458 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0311 21:38:43.529188   70458 pod_ready.go:38] duration metric: took 4m4.551429192s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:38:43.529207   70458 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:38:43.529242   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:43.529306   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:43.589292   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:43.589314   70458 cri.go:89] found id: ""
	I0311 21:38:43.589323   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:43.589388   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.595182   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:43.595267   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:43.645002   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:43.645027   70458 cri.go:89] found id: ""
	I0311 21:38:43.645036   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:43.645088   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.650463   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:43.650537   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:43.693876   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:43.693894   70458 cri.go:89] found id: ""
	I0311 21:38:43.693902   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:43.693958   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.699273   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:43.699340   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:43.752552   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:43.752585   70458 cri.go:89] found id: ""
	I0311 21:38:43.752596   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:43.752667   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.758307   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:43.758384   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:43.802761   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:43.802789   70458 cri.go:89] found id: ""
	I0311 21:38:43.802798   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:43.802858   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.807796   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:43.807867   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:43.853820   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:43.853843   70458 cri.go:89] found id: ""
	I0311 21:38:43.853851   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:43.853907   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.859377   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:43.859451   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:43.910605   70458 cri.go:89] found id: ""
	I0311 21:38:43.910640   70458 logs.go:276] 0 containers: []
	W0311 21:38:43.910648   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:43.910655   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:43.910702   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:43.955602   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:43.955624   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:43.955629   70458 cri.go:89] found id: ""
	I0311 21:38:43.955645   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:43.955713   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.960856   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.965889   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:43.965919   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:44.013879   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:44.013908   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:44.064641   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:44.064669   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:44.118095   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:44.118120   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:44.177775   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:44.177819   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:44.242090   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:44.242129   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:44.261628   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:44.261665   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:44.322616   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:44.322656   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:44.388117   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:44.388159   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:44.445980   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:44.446018   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:44.980199   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:44.980243   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:45.138312   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:45.138368   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:45.208626   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:45.208664   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:42.472932   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:42.488034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:42.488090   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:42.530945   70908 cri.go:89] found id: ""
	I0311 21:38:42.530971   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.530981   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:42.530989   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:42.531053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:42.571906   70908 cri.go:89] found id: ""
	I0311 21:38:42.571939   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.571951   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:42.571960   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:42.572029   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:42.613198   70908 cri.go:89] found id: ""
	I0311 21:38:42.613228   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.613239   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:42.613247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:42.613330   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:42.654740   70908 cri.go:89] found id: ""
	I0311 21:38:42.654762   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.654770   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:42.654775   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:42.654821   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:42.694797   70908 cri.go:89] found id: ""
	I0311 21:38:42.694836   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.694847   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:42.694854   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:42.694931   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:42.738918   70908 cri.go:89] found id: ""
	I0311 21:38:42.738946   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.738958   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:42.738965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:42.739032   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:42.780836   70908 cri.go:89] found id: ""
	I0311 21:38:42.780870   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.780881   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:42.780888   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:42.780943   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:42.824672   70908 cri.go:89] found id: ""
	I0311 21:38:42.824701   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.824712   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:42.824721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:42.824747   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:42.877219   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:42.877253   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:42.934996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:42.935033   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:42.952125   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:42.952152   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:43.036657   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:43.036678   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:43.036695   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:45.629959   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:45.648501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:45.648581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:45.690083   70908 cri.go:89] found id: ""
	I0311 21:38:45.690117   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.690128   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:45.690136   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:45.690201   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:45.736497   70908 cri.go:89] found id: ""
	I0311 21:38:45.736519   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.736526   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:45.736531   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:45.736576   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:45.778590   70908 cri.go:89] found id: ""
	I0311 21:38:45.778625   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.778636   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:45.778645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:45.778723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:45.822322   70908 cri.go:89] found id: ""
	I0311 21:38:45.822351   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.822359   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:45.822365   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:45.822419   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:45.868591   70908 cri.go:89] found id: ""
	I0311 21:38:45.868618   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.868627   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:45.868633   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:45.868680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:45.915137   70908 cri.go:89] found id: ""
	I0311 21:38:45.915165   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.915178   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:45.915187   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:45.915258   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:45.960432   70908 cri.go:89] found id: ""
	I0311 21:38:45.960459   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.960469   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:45.960476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:45.960529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:46.006089   70908 cri.go:89] found id: ""
	I0311 21:38:46.006168   70908 logs.go:276] 0 containers: []
	W0311 21:38:46.006185   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:46.006195   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:46.006209   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:44.153091   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:46.650654   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:44.951550   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:46.952791   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:47.756629   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:47.776613   70458 api_server.go:72] duration metric: took 4m14.182101385s to wait for apiserver process to appear ...
	I0311 21:38:47.776651   70458 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:38:47.776691   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:47.776774   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:47.826534   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:47.826553   70458 cri.go:89] found id: ""
	I0311 21:38:47.826560   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:47.826609   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.831565   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:47.831637   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:47.876504   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:47.876531   70458 cri.go:89] found id: ""
	I0311 21:38:47.876541   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:47.876598   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.882130   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:47.882224   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:47.930064   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:47.930087   70458 cri.go:89] found id: ""
	I0311 21:38:47.930096   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:47.930139   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.935357   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:47.935433   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:47.989169   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:47.989196   70458 cri.go:89] found id: ""
	I0311 21:38:47.989206   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:47.989262   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.994341   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:47.994401   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:48.037592   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:48.037619   70458 cri.go:89] found id: ""
	I0311 21:38:48.037629   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:48.037692   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.043377   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:48.043453   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:48.088629   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:48.088651   70458 cri.go:89] found id: ""
	I0311 21:38:48.088671   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:48.088722   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.093944   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:48.094016   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:48.144943   70458 cri.go:89] found id: ""
	I0311 21:38:48.144971   70458 logs.go:276] 0 containers: []
	W0311 21:38:48.144983   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:48.144990   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:48.145050   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:48.188857   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:48.188877   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:48.188881   70458 cri.go:89] found id: ""
	I0311 21:38:48.188887   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:48.188934   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.195123   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.200643   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:48.200673   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:48.246864   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:48.246894   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:48.715510   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:48.715545   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:48.775676   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:48.775716   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:48.793121   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:48.793157   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:48.863992   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:48.864040   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:48.922775   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:48.922810   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:48.996820   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:48.996866   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:49.045065   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:49.045097   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:49.199072   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:49.199137   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:49.283329   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:49.283360   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:49.340461   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:49.340502   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:49.391436   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:49.391460   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:46.064257   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:46.064296   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:46.080304   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:46.080337   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:46.177978   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:46.178001   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:46.178017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:46.265260   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:46.265298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:48.814221   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:48.835695   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:48.835793   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:48.898391   70908 cri.go:89] found id: ""
	I0311 21:38:48.898418   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.898429   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:48.898437   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:48.898501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:48.972552   70908 cri.go:89] found id: ""
	I0311 21:38:48.972596   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.972607   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:48.972617   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:48.972684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:49.022346   70908 cri.go:89] found id: ""
	I0311 21:38:49.022371   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.022379   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:49.022384   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:49.022430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:49.078415   70908 cri.go:89] found id: ""
	I0311 21:38:49.078444   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.078455   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:49.078463   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:49.078526   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:49.119369   70908 cri.go:89] found id: ""
	I0311 21:38:49.119402   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.119412   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:49.119420   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:49.119497   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:49.169866   70908 cri.go:89] found id: ""
	I0311 21:38:49.169897   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.169908   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:49.169916   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:49.169978   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:49.223619   70908 cri.go:89] found id: ""
	I0311 21:38:49.223642   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.223650   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:49.223656   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:49.223704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:49.278499   70908 cri.go:89] found id: ""
	I0311 21:38:49.278531   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.278542   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:49.278551   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:49.278563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:49.294734   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:49.294760   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:49.390223   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:49.390252   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:49.390267   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:49.481214   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:49.481250   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:49.530285   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:49.530321   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:49.149825   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.648269   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:53.140832   70604 pod_ready.go:81] duration metric: took 4m0.000856291s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" ...
	E0311 21:38:53.140873   70604 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" (will not retry!)
	I0311 21:38:53.140895   70604 pod_ready.go:38] duration metric: took 4m13.032115697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:38:53.140925   70604 kubeadm.go:591] duration metric: took 4m21.406945055s to restartPrimaryControlPlane
	W0311 21:38:53.140993   70604 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:38:53.141028   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:38:49.450738   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.950491   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:53.952209   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.955522   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:38:51.961814   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0311 21:38:51.963188   70458 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:38:51.963209   70458 api_server.go:131] duration metric: took 4.186550701s to wait for apiserver health ...
	I0311 21:38:51.963218   70458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:38:51.963242   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:51.963294   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:52.020708   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:52.020727   70458 cri.go:89] found id: ""
	I0311 21:38:52.020746   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:52.020815   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.026606   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:52.026668   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:52.072045   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:52.072063   70458 cri.go:89] found id: ""
	I0311 21:38:52.072071   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:52.072130   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.078592   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:52.078771   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:52.139445   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:52.139480   70458 cri.go:89] found id: ""
	I0311 21:38:52.139490   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:52.139548   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.148641   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:52.148724   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:52.199332   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:52.199360   70458 cri.go:89] found id: ""
	I0311 21:38:52.199371   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:52.199433   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.207033   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:52.207096   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:52.267514   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:52.267540   70458 cri.go:89] found id: ""
	I0311 21:38:52.267549   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:52.267615   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.274048   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:52.274132   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:52.330293   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:52.330324   70458 cri.go:89] found id: ""
	I0311 21:38:52.330334   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:52.330395   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.336062   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:52.336143   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:52.381909   70458 cri.go:89] found id: ""
	I0311 21:38:52.381941   70458 logs.go:276] 0 containers: []
	W0311 21:38:52.381952   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:52.381960   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:52.382026   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:52.441879   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:52.441908   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:52.441919   70458 cri.go:89] found id: ""
	I0311 21:38:52.441928   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:52.441988   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.449288   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.456632   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:52.456664   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.526327   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:52.526368   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:52.545008   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:52.545035   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:52.699959   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:52.699995   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:52.762045   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:52.762079   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:52.828963   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:52.829005   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:52.874202   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:52.874237   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:52.916842   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:52.916872   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:52.969778   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:52.969807   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:53.365097   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:53.365147   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:53.446533   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:53.446576   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:53.500017   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:53.500043   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:53.572904   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:53.572954   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
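The lines above are minikube's standard log-collection pass: each "Gathering logs for" entry maps to one command run over SSH inside the guest. The same collection can be reproduced by hand, a sketch assuming the no-preload-324578 profile from this log and a container ID taken from crictl ps -a:

    # Open a shell in the guest first:  minikube ssh -p no-preload-324578
    sudo journalctl -u kubelet -n 400                                          # kubelet logs
    sudo journalctl -u crio -n 400                                             # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
    sudo crictl ps -a                                                          # container status (source of IDs)
    sudo /usr/bin/crictl logs --tail 400 <container-id>                        # per-container logs, e.g. kube-apiserver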
	I0311 21:38:52.087848   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:52.108284   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:52.108351   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:52.161648   70908 cri.go:89] found id: ""
	I0311 21:38:52.161680   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.161691   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:52.161698   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:52.161763   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:52.206552   70908 cri.go:89] found id: ""
	I0311 21:38:52.206577   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.206588   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:52.206596   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:52.206659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:52.253954   70908 cri.go:89] found id: ""
	I0311 21:38:52.253984   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.253996   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:52.254004   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:52.254068   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:52.302343   70908 cri.go:89] found id: ""
	I0311 21:38:52.302384   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.302396   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:52.302404   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:52.302472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:52.345581   70908 cri.go:89] found id: ""
	I0311 21:38:52.345608   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.345618   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:52.345624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:52.345683   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:52.392502   70908 cri.go:89] found id: ""
	I0311 21:38:52.392531   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.392542   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:52.392549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:52.392601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:52.447625   70908 cri.go:89] found id: ""
	I0311 21:38:52.447651   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.447661   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:52.447668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:52.447728   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:52.490965   70908 cri.go:89] found id: ""
	I0311 21:38:52.490994   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.491007   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:52.491019   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:52.491034   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:52.539604   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:52.539650   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.597735   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:52.597771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:52.617572   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:52.617610   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:52.706724   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:52.706753   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:52.706769   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
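For this profile every control-plane container lookup above comes back empty and "kubectl describe nodes" fails with a connection refused on localhost:8443, so the apiserver is simply not running at this point; that is what later forces the full kubeadm reset for this cluster. A quick manual confirmation, a sketch assuming curl is available in the guest:

    sudo crictl ps -a --quiet --name=kube-apiserver                            # expect empty output, as in the log
    curl -ksS https://localhost:8443/healthz || echo "apiserver unreachable"   # expect connection refused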
	I0311 21:38:55.293550   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:55.313904   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:55.314005   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:55.368607   70908 cri.go:89] found id: ""
	I0311 21:38:55.368639   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.368647   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:55.368654   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:55.368714   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:55.434052   70908 cri.go:89] found id: ""
	I0311 21:38:55.434081   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.434092   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:55.434100   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:55.434189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:55.483532   70908 cri.go:89] found id: ""
	I0311 21:38:55.483562   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.483572   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:55.483579   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:55.483647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:55.528681   70908 cri.go:89] found id: ""
	I0311 21:38:55.528708   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.528721   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:55.528728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:55.528825   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:55.583143   70908 cri.go:89] found id: ""
	I0311 21:38:55.583167   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.583174   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:55.583179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:55.583240   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:55.636577   70908 cri.go:89] found id: ""
	I0311 21:38:55.636599   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.636607   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:55.636612   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:55.636670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:55.697268   70908 cri.go:89] found id: ""
	I0311 21:38:55.697295   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.697306   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:55.697314   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:55.697374   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:55.749272   70908 cri.go:89] found id: ""
	I0311 21:38:55.749302   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.749312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:55.749322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:55.749335   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:55.841581   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:55.841643   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:55.898537   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:55.898574   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:55.973278   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:55.973329   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:55.992958   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:55.992986   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:56.137313   70458 system_pods.go:59] 8 kube-system pods found
	I0311 21:38:56.137347   70458 system_pods.go:61] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running
	I0311 21:38:56.137354   70458 system_pods.go:61] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running
	I0311 21:38:56.137361   70458 system_pods.go:61] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running
	I0311 21:38:56.137366   70458 system_pods.go:61] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running
	I0311 21:38:56.137371   70458 system_pods.go:61] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:38:56.137375   70458 system_pods.go:61] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running
	I0311 21:38:56.137383   70458 system_pods.go:61] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:38:56.137390   70458 system_pods.go:61] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:38:56.137400   70458 system_pods.go:74] duration metric: took 4.174175629s to wait for pod list to return data ...
	I0311 21:38:56.137409   70458 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:38:56.140315   70458 default_sa.go:45] found service account: "default"
	I0311 21:38:56.140344   70458 default_sa.go:55] duration metric: took 2.92722ms for default service account to be created ...
	I0311 21:38:56.140356   70458 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:38:56.146873   70458 system_pods.go:86] 8 kube-system pods found
	I0311 21:38:56.146912   70458 system_pods.go:89] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running
	I0311 21:38:56.146923   70458 system_pods.go:89] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running
	I0311 21:38:56.146932   70458 system_pods.go:89] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running
	I0311 21:38:56.146940   70458 system_pods.go:89] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running
	I0311 21:38:56.146945   70458 system_pods.go:89] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:38:56.146951   70458 system_pods.go:89] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running
	I0311 21:38:56.146960   70458 system_pods.go:89] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:38:56.146972   70458 system_pods.go:89] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:38:56.146983   70458 system_pods.go:126] duration metric: took 6.619737ms to wait for k8s-apps to be running ...
	I0311 21:38:56.146998   70458 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:38:56.147056   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:38:56.165354   70458 system_svc.go:56] duration metric: took 18.346754ms WaitForService to wait for kubelet
	I0311 21:38:56.165387   70458 kubeadm.go:576] duration metric: took 4m22.570894549s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:38:56.165413   70458 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:38:56.168819   70458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:38:56.168845   70458 node_conditions.go:123] node cpu capacity is 2
	I0311 21:38:56.168856   70458 node_conditions.go:105] duration metric: took 3.437527ms to run NodePressure ...
	I0311 21:38:56.168868   70458 start.go:240] waiting for startup goroutines ...
	I0311 21:38:56.168875   70458 start.go:245] waiting for cluster config update ...
	I0311 21:38:56.168885   70458 start.go:254] writing updated cluster config ...
	I0311 21:38:56.169153   70458 ssh_runner.go:195] Run: rm -f paused
	I0311 21:38:56.225977   70458 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0311 21:38:56.228234   70458 out.go:177] * Done! kubectl is now configured to use "no-preload-324578" cluster and "default" namespace by default
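At this point the no-preload-324578 cluster is reported ready except for metrics-server-57f55c9bc5-nv4gd, which the pod list above shows as Pending with an unready metrics-server container (the condition several of the failing tests wait on). A hedged way to inspect it from the host, assuming the kubectl context name matches the profile name as the "Done!" line indicates:

    kubectl --context no-preload-324578 -n kube-system get pods
    kubectl --context no-preload-324578 -n kube-system describe pod metrics-server-57f55c9bc5-nv4gd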
	I0311 21:38:56.450729   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:58.450799   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	W0311 21:38:56.084193   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:58.584354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:58.604767   70908 kubeadm.go:591] duration metric: took 4m4.440744932s to restartPrimaryControlPlane
	W0311 21:38:58.604844   70908 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:38:58.604872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:38:59.965834   70908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.36094005s)
	I0311 21:38:59.965906   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:38:59.982020   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:38:59.994794   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:39:00.007116   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:39:00.007138   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:39:00.007182   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:39:00.019744   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:39:00.019802   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:39:00.033311   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:39:00.045608   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:39:00.045685   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:39:00.059722   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.071140   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:39:00.071199   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.082635   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:39:00.093311   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:39:00.093374   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
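The grep/rm pairs above are minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it checks whether the file already points at https://control-plane.minikube.internal:8443 and deletes it if not; here the files do not exist at all, so each grep exits with status 2 and the rm -f is a no-op. The whole sequence condenses to this loop, using only commands that appear in the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep a config only if it already targets the expected control-plane endpoint
      sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done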
	I0311 21:39:00.104995   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:39:00.372164   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:39:00.950799   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:03.450080   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:05.949899   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:07.950640   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:10.450583   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:12.949481   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:14.950496   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:16.951064   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:18.958165   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:21.450609   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:23.949791   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:26.302837   70604 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.161781704s)
	I0311 21:39:26.302921   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:39:26.319602   70604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:39:26.331483   70604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:39:26.343632   70604 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:39:26.343658   70604 kubeadm.go:156] found existing configuration files:
	
	I0311 21:39:26.343705   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:39:26.354863   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:39:26.354919   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:39:26.366087   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:39:26.377221   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:39:26.377282   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:39:26.389769   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:39:26.401201   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:39:26.401255   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:39:26.412357   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:39:26.423962   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:39:26.424035   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:39:26.436189   70604 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:39:26.672030   70604 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:39:25.952857   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:28.449272   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:30.450630   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:32.450912   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:35.908605   70604 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 21:39:35.908656   70604 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:39:35.908751   70604 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:39:35.908846   70604 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:39:35.908967   70604 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:39:35.909026   70604 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:39:35.910690   70604 out.go:204]   - Generating certificates and keys ...
	I0311 21:39:35.910785   70604 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:39:35.910849   70604 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:39:35.910952   70604 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:39:35.911039   70604 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:39:35.911106   70604 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:39:35.911177   70604 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:39:35.911268   70604 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:39:35.911353   70604 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:39:35.911449   70604 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:39:35.911551   70604 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:39:35.911604   70604 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:39:35.911689   70604 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:39:35.911762   70604 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:39:35.911869   70604 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:39:35.911974   70604 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:39:35.912067   70604 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:39:35.912217   70604 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:39:35.912320   70604 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:39:35.914908   70604 out.go:204]   - Booting up control plane ...
	I0311 21:39:35.915026   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:39:35.915126   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:39:35.915216   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:39:35.915321   70604 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:39:35.915431   70604 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:39:35.915487   70604 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:39:35.915659   70604 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:39:35.915792   70604 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503325 seconds
	I0311 21:39:35.915925   70604 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:39:35.916039   70604 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:39:35.916091   70604 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:39:35.916314   70604 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-743937 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:39:35.916408   70604 kubeadm.go:309] [bootstrap-token] Using token: hxeoeg.f2scq51qa57vwzwt
	I0311 21:39:35.917880   70604 out.go:204]   - Configuring RBAC rules ...
	I0311 21:39:35.917995   70604 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:39:35.918093   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:39:35.918297   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:39:35.918490   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:39:35.918629   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:39:35.918745   70604 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:39:35.918907   70604 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:39:35.918974   70604 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:39:35.919031   70604 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:39:35.919048   70604 kubeadm.go:309] 
	I0311 21:39:35.919118   70604 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:39:35.919128   70604 kubeadm.go:309] 
	I0311 21:39:35.919225   70604 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:39:35.919236   70604 kubeadm.go:309] 
	I0311 21:39:35.919266   70604 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:39:35.919344   70604 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:39:35.919405   70604 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:39:35.919412   70604 kubeadm.go:309] 
	I0311 21:39:35.919461   70604 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:39:35.919467   70604 kubeadm.go:309] 
	I0311 21:39:35.919505   70604 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:39:35.919511   70604 kubeadm.go:309] 
	I0311 21:39:35.919553   70604 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:39:35.919640   70604 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:39:35.919727   70604 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:39:35.919736   70604 kubeadm.go:309] 
	I0311 21:39:35.919835   70604 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:39:35.919949   70604 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:39:35.919964   70604 kubeadm.go:309] 
	I0311 21:39:35.920071   70604 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token hxeoeg.f2scq51qa57vwzwt \
	I0311 21:39:35.920172   70604 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:39:35.920193   70604 kubeadm.go:309] 	--control-plane 
	I0311 21:39:35.920199   70604 kubeadm.go:309] 
	I0311 21:39:35.920271   70604 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:39:35.920280   70604 kubeadm.go:309] 
	I0311 21:39:35.920349   70604 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token hxeoeg.f2scq51qa57vwzwt \
	I0311 21:39:35.920479   70604 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:39:35.920507   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:39:35.920517   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:39:35.922125   70604 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:39:35.923386   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:39:35.955828   70604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
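The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation only, a bridge CNI conflist of the kind this "Configuring bridge CNI" step installs generally has the following shape; the exact contents and subnet are an assumption, not the file minikube wrote here:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF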
	I0311 21:39:36.065309   70604 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:39:36.065389   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:36.065408   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-743937 minikube.k8s.io/updated_at=2024_03_11T21_39_36_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=embed-certs-743937 minikube.k8s.io/primary=true
	I0311 21:39:36.370945   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:36.370961   70604 ops.go:34] apiserver oom_adj: -16
	I0311 21:39:36.871194   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:37.371937   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:37.871974   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:38.371330   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:38.871791   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:34.949300   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:36.942990   70417 pod_ready.go:81] duration metric: took 4m0.000574155s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" ...
	E0311 21:39:36.943022   70417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0311 21:39:36.943043   70417 pod_ready.go:38] duration metric: took 4m12.043798271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:36.943093   70417 kubeadm.go:591] duration metric: took 4m20.121624644s to restartPrimaryControlPlane
	W0311 21:39:36.943155   70417 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:39:36.943183   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:39:39.371531   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:39.872032   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:40.371717   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:40.871615   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:41.371577   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:41.871841   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:42.371050   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:42.871044   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:43.371446   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:43.871815   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:44.371243   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:44.872056   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:45.371993   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:45.871213   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:46.371397   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:46.871185   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.371541   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.871121   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.971855   70604 kubeadm.go:1106] duration metric: took 11.906533451s to wait for elevateKubeSystemPrivileges
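The burst of "kubectl get sa default" calls above is the elevateKubeSystemPrivileges step: minikube grants kube-system:default cluster-admin and then polls roughly every half second until the default ServiceAccount exists, which took 11.9s here. An equivalent manual sequence, a sketch with the binary and kubeconfig paths taken from the log:

    KUBECTL=/var/lib/minikube/binaries/v1.28.4/kubectl
    KUBECONFIG_PATH=/var/lib/minikube/kubeconfig
    # grant kube-system:default cluster-admin (same command the log shows)
    sudo "$KUBECTL" create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
      --kubeconfig="$KUBECONFIG_PATH"
    # poll until the default ServiceAccount appears
    until sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG_PATH" >/dev/null 2>&1; do
      sleep 0.5
    done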
	W0311 21:39:47.971895   70604 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 21:39:47.971902   70604 kubeadm.go:393] duration metric: took 5m16.305518086s to StartCluster
	I0311 21:39:47.971917   70604 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:39:47.972003   70604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:39:47.974339   70604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:39:47.974576   70604 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:39:47.976309   70604 out.go:177] * Verifying Kubernetes components...
	I0311 21:39:47.974638   70604 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:39:47.974819   70604 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:39:47.977737   70604 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-743937"
	I0311 21:39:47.977746   70604 addons.go:69] Setting default-storageclass=true in profile "embed-certs-743937"
	I0311 21:39:47.977779   70604 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-743937"
	W0311 21:39:47.977790   70604 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:39:47.977815   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.977740   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:39:47.977779   70604 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-743937"
	I0311 21:39:47.977750   70604 addons.go:69] Setting metrics-server=true in profile "embed-certs-743937"
	I0311 21:39:47.977943   70604 addons.go:234] Setting addon metrics-server=true in "embed-certs-743937"
	W0311 21:39:47.977957   70604 addons.go:243] addon metrics-server should already be in state true
	I0311 21:39:47.977985   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.978241   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978241   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978270   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.978275   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.978419   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978449   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.994019   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44139
	I0311 21:39:47.994131   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0311 21:39:47.994484   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.994514   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.994964   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.994983   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.995128   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.995143   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.995288   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0311 21:39:47.995437   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.995506   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.995583   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:47.996051   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.996073   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.996516   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.996999   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.997024   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.997383   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.997834   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.997858   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.999381   70604 addons.go:234] Setting addon default-storageclass=true in "embed-certs-743937"
	W0311 21:39:47.999406   70604 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:39:47.999432   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.999794   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.999823   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:48.012063   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41291
	I0311 21:39:48.012470   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.012899   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.012923   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.013267   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.013334   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0311 21:39:48.013484   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.013767   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.014259   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.014279   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.014556   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.014752   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.015486   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.017650   70604 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:39:48.016591   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.019717   70604 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:39:48.019736   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:39:48.019758   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.021823   70604 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:39:48.023083   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:39:48.023095   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:39:48.023108   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.023306   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.023589   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0311 21:39:48.023916   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.023937   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.024255   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.024412   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.024533   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.024653   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.025517   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.025955   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.025967   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.026292   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.027365   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.027654   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:48.027692   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:48.027909   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.027965   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.028188   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.028369   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.028496   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.028603   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.048933   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0311 21:39:48.049338   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.049918   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.049929   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.050342   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.050502   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.052274   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.052523   70604 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:39:48.052537   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:39:48.052554   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.055438   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.055864   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.055881   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.056156   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.056334   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.056495   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.056608   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.175402   70604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:39:48.196199   70604 node_ready.go:35] waiting up to 6m0s for node "embed-certs-743937" to be "Ready" ...
	I0311 21:39:48.215911   70604 node_ready.go:49] node "embed-certs-743937" has status "Ready":"True"
	I0311 21:39:48.215935   70604 node_ready.go:38] duration metric: took 19.701474ms for node "embed-certs-743937" to be "Ready" ...
	I0311 21:39:48.215945   70604 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:48.223525   70604 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.228887   70604 pod_ready.go:92] pod "etcd-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.228907   70604 pod_ready.go:81] duration metric: took 5.35597ms for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.228917   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.233811   70604 pod_ready.go:92] pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.233828   70604 pod_ready.go:81] duration metric: took 4.904721ms for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.233839   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.241831   70604 pod_ready.go:92] pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.241848   70604 pod_ready.go:81] duration metric: took 8.002663ms for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.241857   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.247609   70604 pod_ready.go:92] pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.247633   70604 pod_ready.go:81] duration metric: took 5.767693ms for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.247641   70604 pod_ready.go:38] duration metric: took 31.680305ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:48.247656   70604 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:39:48.247704   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:39:48.270201   70604 api_server.go:72] duration metric: took 295.596568ms to wait for apiserver process to appear ...
	I0311 21:39:48.270224   70604 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:39:48.270242   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:39:48.277642   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0311 21:39:48.280487   70604 api_server.go:141] control plane version: v1.28.4
	I0311 21:39:48.280505   70604 api_server.go:131] duration metric: took 10.273204ms to wait for apiserver health ...
	I0311 21:39:48.280514   70604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:39:48.343718   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:39:48.346848   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:39:48.346864   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:39:48.400878   70604 system_pods.go:59] 4 kube-system pods found
	I0311 21:39:48.400907   70604 system_pods.go:61] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:48.400913   70604 system_pods.go:61] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:48.400919   70604 system_pods.go:61] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:48.400923   70604 system_pods.go:61] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:48.400931   70604 system_pods.go:74] duration metric: took 120.410888ms to wait for pod list to return data ...
	I0311 21:39:48.400940   70604 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:39:48.401062   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:39:48.401083   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:39:48.406115   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:39:48.492018   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:39:48.492042   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:39:48.581187   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:39:48.602016   70604 default_sa.go:45] found service account: "default"
	I0311 21:39:48.602046   70604 default_sa.go:55] duration metric: took 201.097662ms for default service account to be created ...
	I0311 21:39:48.602056   70604 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:39:48.862115   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:48.862148   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending
	I0311 21:39:48.862155   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending
	I0311 21:39:48.862159   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:48.862164   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:48.862169   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:48.862176   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:48.862180   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:48.862199   70604 retry.go:31] will retry after 266.08114ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.139648   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.139675   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.139682   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.139689   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.139694   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.139700   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.139706   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.139710   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.139724   70604 retry.go:31] will retry after 293.420416ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.476384   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.476411   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.476418   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.476423   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.476429   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.476433   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.476438   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.476442   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.476456   70604 retry.go:31] will retry after 439.10065ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.927298   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.927337   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.927348   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.927357   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.927366   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.927373   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.927381   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.927389   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.927411   70604 retry.go:31] will retry after 396.604462ms: missing components: kube-dns, kube-proxy
	I0311 21:39:50.092631   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.68647s)
	I0311 21:39:50.092698   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.092718   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093147   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093200   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093223   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093241   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093280   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.749522465s)
	I0311 21:39:50.093321   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093336   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093507   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093529   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093746   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.093759   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093773   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093797   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093805   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.094040   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.094041   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.094067   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.111807   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.111831   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.112109   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.112127   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.112132   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.291598   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.710367476s)
	I0311 21:39:50.291651   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.291671   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.292020   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.292036   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.292044   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.292050   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.292287   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.292328   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.292352   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.292367   70604 addons.go:470] Verifying addon metrics-server=true in "embed-certs-743937"
	I0311 21:39:50.294192   70604 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0311 21:39:50.295405   70604 addons.go:505] duration metric: took 2.320766016s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0311 21:39:50.339623   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:50.339651   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:50.339658   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:50.339665   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:50.339671   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:50.339677   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:50.339682   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:50.339688   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:50.339695   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:50.339704   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:39:50.339728   70604 retry.go:31] will retry after 674.573171ms: missing components: kube-dns
	I0311 21:39:51.021666   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:51.021704   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.021716   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.021723   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:51.021731   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:51.021743   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:51.021754   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:51.021760   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:51.021772   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:51.021786   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:39:51.021805   70604 retry.go:31] will retry after 716.470399ms: missing components: kube-dns
	I0311 21:39:51.745786   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:51.745818   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.745829   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.745840   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:51.745849   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:51.745855   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:51.745861   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:51.745867   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:51.745876   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:51.745886   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Running
	I0311 21:39:51.745904   70604 retry.go:31] will retry after 873.920018ms: missing components: kube-dns
	I0311 21:39:52.627896   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:52.627922   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Running
	I0311 21:39:52.627927   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Running
	I0311 21:39:52.627932   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:52.627936   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:52.627941   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:52.627944   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:52.627948   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:52.627954   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:52.627958   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Running
	I0311 21:39:52.627966   70604 system_pods.go:126] duration metric: took 4.025903884s to wait for k8s-apps to be running ...
	I0311 21:39:52.627976   70604 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:39:52.628017   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:39:52.643356   70604 system_svc.go:56] duration metric: took 15.371853ms WaitForService to wait for kubelet
	I0311 21:39:52.643378   70604 kubeadm.go:576] duration metric: took 4.668777182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:39:52.643394   70604 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:39:52.646844   70604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:39:52.646862   70604 node_conditions.go:123] node cpu capacity is 2
	I0311 21:39:52.646871   70604 node_conditions.go:105] duration metric: took 3.47245ms to run NodePressure ...
	I0311 21:39:52.646881   70604 start.go:240] waiting for startup goroutines ...
	I0311 21:39:52.646891   70604 start.go:245] waiting for cluster config update ...
	I0311 21:39:52.646904   70604 start.go:254] writing updated cluster config ...
	I0311 21:39:52.647207   70604 ssh_runner.go:195] Run: rm -f paused
	I0311 21:39:52.697687   70604 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:39:52.699641   70604 out.go:177] * Done! kubectl is now configured to use "embed-certs-743937" cluster and "default" namespace by default
	I0311 21:40:09.411155   70417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.467938624s)
	I0311 21:40:09.411245   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:09.429951   70417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:40:09.442265   70417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:40:09.453883   70417 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:40:09.453899   70417 kubeadm.go:156] found existing configuration files:
	
	I0311 21:40:09.453934   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0311 21:40:09.465106   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:40:09.465161   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:40:09.476155   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0311 21:40:09.487366   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:40:09.487413   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:40:09.497877   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0311 21:40:09.508056   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:40:09.508096   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:40:09.518709   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0311 21:40:09.529005   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:40:09.529039   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:40:09.539755   70417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:40:09.601265   70417 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 21:40:09.601399   70417 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:09.771387   70417 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:09.771548   70417 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:09.771653   70417 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:10.016610   70417 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:10.018526   70417 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:10.018613   70417 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:10.018670   70417 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:10.018752   70417 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:10.018830   70417 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:10.018926   70417 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:10.019019   70417 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:10.019436   70417 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:10.019924   70417 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:10.020435   70417 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:10.020949   70417 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:10.021470   70417 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:10.021550   70417 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:10.087827   70417 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:10.326702   70417 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:10.515476   70417 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:10.585573   70417 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:10.586277   70417 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:10.588784   70417 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:10.590786   70417 out.go:204]   - Booting up control plane ...
	I0311 21:40:10.590969   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:10.591080   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:10.591164   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:10.613086   70417 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:10.613187   70417 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:10.613224   70417 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:10.753737   70417 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:40:17.258016   70417 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503151 seconds
	I0311 21:40:17.258170   70417 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:40:17.276142   70417 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:40:17.805116   70417 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:40:17.805383   70417 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-766430 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:40:18.323836   70417 kubeadm.go:309] [bootstrap-token] Using token: 9sjslg.sf5b1bfk3wp77z35
	I0311 21:40:18.325382   70417 out.go:204]   - Configuring RBAC rules ...
	I0311 21:40:18.325478   70417 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:40:18.331585   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:40:18.344341   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:40:18.348362   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:40:18.352181   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:40:18.363299   70417 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:40:18.377835   70417 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:40:18.612013   70417 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:40:18.755215   70417 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:40:18.755235   70417 kubeadm.go:309] 
	I0311 21:40:18.755300   70417 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:40:18.755314   70417 kubeadm.go:309] 
	I0311 21:40:18.755434   70417 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:40:18.755460   70417 kubeadm.go:309] 
	I0311 21:40:18.755490   70417 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:40:18.755571   70417 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:40:18.755636   70417 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:40:18.755647   70417 kubeadm.go:309] 
	I0311 21:40:18.755721   70417 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:40:18.755731   70417 kubeadm.go:309] 
	I0311 21:40:18.755794   70417 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:40:18.755804   70417 kubeadm.go:309] 
	I0311 21:40:18.755876   70417 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:40:18.755941   70417 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:40:18.756010   70417 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:40:18.756029   70417 kubeadm.go:309] 
	I0311 21:40:18.756152   70417 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:40:18.756267   70417 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:40:18.756277   70417 kubeadm.go:309] 
	I0311 21:40:18.756391   70417 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 9sjslg.sf5b1bfk3wp77z35 \
	I0311 21:40:18.756533   70417 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:40:18.756578   70417 kubeadm.go:309] 	--control-plane 
	I0311 21:40:18.756585   70417 kubeadm.go:309] 
	I0311 21:40:18.756695   70417 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:40:18.756706   70417 kubeadm.go:309] 
	I0311 21:40:18.756844   70417 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 9sjslg.sf5b1bfk3wp77z35 \
	I0311 21:40:18.757021   70417 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:40:18.759444   70417 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:40:18.759474   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:40:18.759489   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:40:18.761354   70417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:40:18.762676   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:40:18.793496   70417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:40:18.840426   70417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:40:18.840508   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:18.840508   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-766430 minikube.k8s.io/updated_at=2024_03_11T21_40_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=default-k8s-diff-port-766430 minikube.k8s.io/primary=true
	I0311 21:40:19.150012   70417 ops.go:34] apiserver oom_adj: -16
	I0311 21:40:19.150129   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:19.650947   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:20.150969   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:20.650687   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:21.150849   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:21.650356   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:22.150737   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:22.650225   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:23.150390   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:23.650650   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:24.151081   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:24.650689   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:25.150428   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:25.650265   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:26.150198   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:26.650610   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:27.150325   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:27.650794   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:28.150855   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:28.650819   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:29.150345   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:29.650746   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.150910   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.650742   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.790472   70417 kubeadm.go:1106] duration metric: took 11.95003413s to wait for elevateKubeSystemPrivileges
	W0311 21:40:30.790506   70417 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 21:40:30.790513   70417 kubeadm.go:393] duration metric: took 5m14.024392605s to StartCluster
	I0311 21:40:30.790527   70417 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:40:30.790630   70417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:40:30.792582   70417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:40:30.792843   70417 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:40:30.794425   70417 out.go:177] * Verifying Kubernetes components...
	I0311 21:40:30.792920   70417 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:40:30.793051   70417 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:40:30.796119   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:40:30.796129   70417 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796160   70417 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.796171   70417 addons.go:243] addon metrics-server should already be in state true
	I0311 21:40:30.796197   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.796121   70417 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796127   70417 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796237   70417 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.796253   70417 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:40:30.796268   70417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-766430"
	I0311 21:40:30.796278   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.796663   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796694   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796699   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.796722   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.796777   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796807   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.812156   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0311 21:40:30.812601   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.813108   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.813138   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.813532   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.813995   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.814031   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.816427   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I0311 21:40:30.816626   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0311 21:40:30.816863   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.817015   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.817365   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.817385   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.817532   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.817557   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.817905   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.817908   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.818696   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.819070   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.819100   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.822839   70417 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.822858   70417 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:40:30.822885   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.823188   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.823202   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.834007   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I0311 21:40:30.834521   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.835017   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.835033   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.835418   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.835620   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.837838   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.839548   70417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:40:30.838397   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I0311 21:40:30.840244   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0311 21:40:30.840869   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:40:30.840885   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:40:30.840904   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.841295   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.841345   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.841877   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.841894   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.841994   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.842012   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.842246   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.842414   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.842448   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.842960   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.842985   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.844184   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.844406   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.845769   70417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:40:30.847105   70417 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:40:30.844838   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.847124   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:40:30.847142   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.845110   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.847151   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.847302   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.847424   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.847550   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:30.849856   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.850205   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.850232   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.850414   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.850575   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.850697   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.850835   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:30.861464   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0311 21:40:30.861799   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.862252   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.862271   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.862655   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.862818   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.864692   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.864956   70417 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:40:30.864978   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:40:30.864996   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.867548   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.867980   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.868013   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.868140   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.868300   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.868433   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.868558   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:31.037958   70417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:40:31.081173   70417 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-766430" to be "Ready" ...
	I0311 21:40:31.103697   70417 node_ready.go:49] node "default-k8s-diff-port-766430" has status "Ready":"True"
	I0311 21:40:31.103717   70417 node_ready.go:38] duration metric: took 22.519334ms for node "default-k8s-diff-port-766430" to be "Ready" ...
	I0311 21:40:31.103726   70417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:40:31.129595   70417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:31.184749   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:40:31.184771   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:40:31.194340   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:40:31.213567   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:40:31.213589   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:40:31.255647   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:40:31.255667   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:40:31.284917   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:40:31.309356   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:40:32.792293   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.597920266s)
	I0311 21:40:32.792337   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.792351   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.792625   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.792686   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.792703   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:32.792714   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.792724   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.793060   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.793086   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:32.793137   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.811230   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.811254   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.811583   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.811587   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.811606   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.156126   70417 pod_ready.go:92] pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.156148   70417 pod_ready.go:81] duration metric: took 2.026531002s for pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.156156   70417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.174226   70417 pod_ready.go:92] pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.174248   70417 pod_ready.go:81] duration metric: took 18.0858ms for pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.174257   70417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.186296   70417 pod_ready.go:92] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.186329   70417 pod_ready.go:81] duration metric: took 12.06396ms for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.186344   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.195902   70417 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.195930   70417 pod_ready.go:81] duration metric: took 9.577334ms for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.195945   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.203134   70417 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.203160   70417 pod_ready.go:81] duration metric: took 7.205172ms for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.203174   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t4fwc" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.449290   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.164324973s)
	I0311 21:40:33.449341   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.139948099s)
	I0311 21:40:33.449374   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449392   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449346   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449461   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449662   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449678   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449688   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449697   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449751   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.449795   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449810   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449823   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449836   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449886   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449905   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449926   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.450213   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.450256   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.450263   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.450272   70417 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-766430"
	I0311 21:40:33.453444   70417 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0311 21:40:33.454670   70417 addons.go:505] duration metric: took 2.661756652s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0311 21:40:33.534893   70417 pod_ready.go:92] pod "kube-proxy-t4fwc" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.534915   70417 pod_ready.go:81] duration metric: took 331.733613ms for pod "kube-proxy-t4fwc" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.534924   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.933950   70417 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.933973   70417 pod_ready.go:81] duration metric: took 399.042085ms for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.933981   70417 pod_ready.go:38] duration metric: took 2.830245804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:40:33.933994   70417 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:40:33.934053   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:40:33.953607   70417 api_server.go:72] duration metric: took 3.160728268s to wait for apiserver process to appear ...
	I0311 21:40:33.953629   70417 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:40:33.953650   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:40:33.959064   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0311 21:40:33.960101   70417 api_server.go:141] control plane version: v1.28.4
	I0311 21:40:33.960125   70417 api_server.go:131] duration metric: took 6.489682ms to wait for apiserver health ...
	I0311 21:40:33.960135   70417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:40:34.137026   70417 system_pods.go:59] 9 kube-system pods found
	I0311 21:40:34.137061   70417 system_pods.go:61] "coredns-5dd5756b68-kxjhf" [09678270-80f4-4bde-8080-3a3a41ecb356] Running
	I0311 21:40:34.137079   70417 system_pods.go:61] "coredns-5dd5756b68-qdcdw" [9f100559-2b0a-4068-a3e7-475b5865a1d9] Running
	I0311 21:40:34.137086   70417 system_pods.go:61] "etcd-default-k8s-diff-port-766430" [c09576c7-db47-4ce1-a8cb-d67926c413fe] Running
	I0311 21:40:34.137093   70417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766430" [f74a16b9-5e73-450f-bc62-c2e501a15ae2] Running
	I0311 21:40:34.137100   70417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766430" [abf4c5ea-4770-49a5-8480-dc9276663588] Running
	I0311 21:40:34.137105   70417 system_pods.go:61] "kube-proxy-t4fwc" [2b82ae7c-bffe-4fe4-b38c-3a789654df85] Running
	I0311 21:40:34.137111   70417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766430" [b1a26b37-7480-4f5c-bd99-785facd8b315] Running
	I0311 21:40:34.137121   70417 system_pods.go:61] "metrics-server-57f55c9bc5-9slpq" [ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:40:34.137133   70417 system_pods.go:61] "storage-provisioner" [d1d4992a-803a-4064-b372-6ba9729bd2ef] Running
	I0311 21:40:34.137147   70417 system_pods.go:74] duration metric: took 177.004603ms to wait for pod list to return data ...
	I0311 21:40:34.137201   70417 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:40:34.333563   70417 default_sa.go:45] found service account: "default"
	I0311 21:40:34.333589   70417 default_sa.go:55] duration metric: took 196.374123ms for default service account to be created ...
	I0311 21:40:34.333600   70417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:40:34.537376   70417 system_pods.go:86] 9 kube-system pods found
	I0311 21:40:34.537401   70417 system_pods.go:89] "coredns-5dd5756b68-kxjhf" [09678270-80f4-4bde-8080-3a3a41ecb356] Running
	I0311 21:40:34.537406   70417 system_pods.go:89] "coredns-5dd5756b68-qdcdw" [9f100559-2b0a-4068-a3e7-475b5865a1d9] Running
	I0311 21:40:34.537411   70417 system_pods.go:89] "etcd-default-k8s-diff-port-766430" [c09576c7-db47-4ce1-a8cb-d67926c413fe] Running
	I0311 21:40:34.537415   70417 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766430" [f74a16b9-5e73-450f-bc62-c2e501a15ae2] Running
	I0311 21:40:34.537420   70417 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766430" [abf4c5ea-4770-49a5-8480-dc9276663588] Running
	I0311 21:40:34.537423   70417 system_pods.go:89] "kube-proxy-t4fwc" [2b82ae7c-bffe-4fe4-b38c-3a789654df85] Running
	I0311 21:40:34.537427   70417 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766430" [b1a26b37-7480-4f5c-bd99-785facd8b315] Running
	I0311 21:40:34.537433   70417 system_pods.go:89] "metrics-server-57f55c9bc5-9slpq" [ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:40:34.537438   70417 system_pods.go:89] "storage-provisioner" [d1d4992a-803a-4064-b372-6ba9729bd2ef] Running
	I0311 21:40:34.537447   70417 system_pods.go:126] duration metric: took 203.840784ms to wait for k8s-apps to be running ...
	I0311 21:40:34.537453   70417 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:40:34.537493   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:34.555483   70417 system_svc.go:56] duration metric: took 18.021595ms WaitForService to wait for kubelet
	I0311 21:40:34.555511   70417 kubeadm.go:576] duration metric: took 3.76263503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:40:34.555534   70417 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:40:34.735214   70417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:40:34.735238   70417 node_conditions.go:123] node cpu capacity is 2
	I0311 21:40:34.735248   70417 node_conditions.go:105] duration metric: took 179.707447ms to run NodePressure ...
	I0311 21:40:34.735258   70417 start.go:240] waiting for startup goroutines ...
	I0311 21:40:34.735264   70417 start.go:245] waiting for cluster config update ...
	I0311 21:40:34.735274   70417 start.go:254] writing updated cluster config ...
	I0311 21:40:34.735539   70417 ssh_runner.go:195] Run: rm -f paused
	I0311 21:40:34.782710   70417 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:40:34.784627   70417 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-766430" cluster and "default" namespace by default
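	[editor's note] The readiness checks minikube ran above (apiserver healthz, kubelet service state, kube-system pod listing) can be retraced by hand against the same node. This is only an illustrative sketch using values taken from this run's logs (IP 192.168.61.11, port 8444, cluster/context name default-k8s-diff-port-766430); it is not part of the recorded output, and the healthz endpoint may require the cluster's client credentials rather than an anonymous request.

		# Illustrative sketch only; endpoints and names come from this specific run.
		curl -sk https://192.168.61.11:8444/healthz                      # apiserver health, as polled at 21:40:33 (may need client certs)
		sudo systemctl is-active --quiet service kubelet && echo kubelet running   # same check the log runs via ssh_runner
		kubectl --context default-k8s-diff-port-766430 get pods -n kube-system     # should show the 9 kube-system pods listed above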
	I0311 21:40:56.380462   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:40:56.380539   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:40:56.382217   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:56.382264   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:56.382349   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:56.382450   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:56.382619   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:56.382712   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:56.384498   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:56.384579   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:56.384636   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:56.384766   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:56.384863   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:56.384967   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:56.385037   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:56.385139   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:56.385208   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:56.385281   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:56.385357   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:56.385408   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:56.385492   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:56.385567   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:56.385644   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:56.385769   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:56.385855   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:56.385962   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:56.386053   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:56.386104   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:56.386184   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:56.387594   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:56.387671   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:56.387738   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:56.387811   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:56.387914   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:56.388107   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:40:56.388182   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:40:56.388297   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388522   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388614   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388844   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388914   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389074   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389131   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389314   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389405   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389594   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389603   70908 kubeadm.go:309] 
	I0311 21:40:56.389653   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:40:56.389720   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:40:56.389732   70908 kubeadm.go:309] 
	I0311 21:40:56.389779   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:40:56.389811   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:40:56.389924   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:40:56.389933   70908 kubeadm.go:309] 
	I0311 21:40:56.390058   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:40:56.390109   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:40:56.390150   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:40:56.390159   70908 kubeadm.go:309] 
	I0311 21:40:56.390299   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:40:56.390395   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:40:56.390409   70908 kubeadm.go:309] 
	I0311 21:40:56.390512   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:40:56.390603   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:40:56.390702   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:40:56.390803   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:40:56.390833   70908 kubeadm.go:309] 
	W0311 21:40:56.390936   70908 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0311 21:40:56.390995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:40:56.941058   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:56.958276   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:40:56.970464   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:40:56.970493   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:40:56.970552   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:40:56.983314   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:40:56.983372   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:40:56.993791   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:40:57.004040   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:40:57.004098   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:40:57.014471   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.024751   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:40:57.024805   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.035389   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:40:57.045511   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:40:57.045556   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:40:57.056774   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:40:57.140620   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:57.140789   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:57.310076   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:57.310193   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:57.310280   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:57.506834   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:57.509261   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:57.509362   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:57.509446   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:57.509576   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:57.509669   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:57.509765   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:57.509839   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:57.509949   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:57.510004   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:57.510109   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:57.510231   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:57.510274   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:57.510361   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:57.585562   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:57.644460   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:57.784382   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:57.848952   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:57.867302   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:57.867791   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:57.867864   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:58.036523   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:58.039051   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:58.039176   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:58.054234   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:58.055548   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:58.057378   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:58.060167   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:41:38.062360   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:41:38.062886   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:38.063137   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:43.063592   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:43.063788   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:53.064505   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:53.064773   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:13.065744   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:13.065995   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.066718   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:53.067030   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.067070   70908 kubeadm.go:309] 
	I0311 21:42:53.067135   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:42:53.067191   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:42:53.067203   70908 kubeadm.go:309] 
	I0311 21:42:53.067259   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:42:53.067318   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:42:53.067456   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:42:53.067466   70908 kubeadm.go:309] 
	I0311 21:42:53.067590   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:42:53.067650   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:42:53.067724   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:42:53.067735   70908 kubeadm.go:309] 
	I0311 21:42:53.067889   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:42:53.068021   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:42:53.068036   70908 kubeadm.go:309] 
	I0311 21:42:53.068169   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:42:53.068297   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:42:53.068412   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:42:53.068512   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:42:53.068523   70908 kubeadm.go:309] 
	I0311 21:42:53.069455   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:42:53.069572   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:42:53.069682   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:42:53.069775   70908 kubeadm.go:393] duration metric: took 7m58.960224884s to StartCluster
	I0311 21:42:53.069833   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:42:53.069899   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:42:53.120459   70908 cri.go:89] found id: ""
	I0311 21:42:53.120486   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.120497   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:42:53.120505   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:42:53.120564   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:42:53.159639   70908 cri.go:89] found id: ""
	I0311 21:42:53.159667   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.159676   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:42:53.159682   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:42:53.159738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:42:53.199584   70908 cri.go:89] found id: ""
	I0311 21:42:53.199607   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.199614   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:42:53.199619   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:42:53.199676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:42:53.238868   70908 cri.go:89] found id: ""
	I0311 21:42:53.238901   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.238908   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:42:53.238917   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:42:53.238963   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:42:53.282172   70908 cri.go:89] found id: ""
	I0311 21:42:53.282205   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.282216   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:42:53.282225   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:42:53.282278   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:42:53.318450   70908 cri.go:89] found id: ""
	I0311 21:42:53.318481   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.318491   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:42:53.318499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:42:53.318559   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:42:53.360887   70908 cri.go:89] found id: ""
	I0311 21:42:53.360913   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.360923   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:42:53.360930   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:42:53.361027   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:42:53.414181   70908 cri.go:89] found id: ""
	I0311 21:42:53.414209   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.414220   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:42:53.414232   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:42:53.414247   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:42:53.478658   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:42:53.478689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:42:53.494577   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:42:53.494604   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:42:53.586460   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:42:53.586483   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:42:53.586500   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:42:53.697218   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:42:53.697251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0311 21:42:53.746291   70908 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0311 21:42:53.746336   70908 out.go:239] * 
	W0311 21:42:53.746388   70908 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.746409   70908 out.go:239] * 
	W0311 21:42:53.747362   70908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 21:42:53.750888   70908 out.go:177] 
	W0311 21:42:53.752146   70908 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.752211   70908 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0311 21:42:53.752239   70908 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0311 21:42:53.753832   70908 out.go:177] 
	
	
	==> CRI-O <==
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.846077965Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193918846053522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5da6f5c-aed2-4877-8c28-e5dd0c6e0e82 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.846539208Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90fb7cba-e02c-4e1f-81f3-f02aaf73d6ad name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.846625076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90fb7cba-e02c-4e1f-81f3-f02aaf73d6ad name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.846659666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=90fb7cba-e02c-4e1f-81f3-f02aaf73d6ad name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.880932384Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e6a328b-8d3f-480c-b529-20d934ed5cb6 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.881023287Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e6a328b-8d3f-480c-b529-20d934ed5cb6 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.882451461Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=550aa21d-5596-4b45-a492-6c93b4a26ff8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.882993292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193918882967984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=550aa21d-5596-4b45-a492-6c93b4a26ff8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.883453204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98aff5f9-acc7-4dbf-bcaa-9a550d75bf0a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.883535096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98aff5f9-acc7-4dbf-bcaa-9a550d75bf0a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.883568988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=98aff5f9-acc7-4dbf-bcaa-9a550d75bf0a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.919746945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=582f45a2-7332-40e8-94e0-62f2558d3465 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.919840041Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=582f45a2-7332-40e8-94e0-62f2558d3465 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.921367627Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24a44e21-60f0-431b-a939-255056f88972 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.921936864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193918921907085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24a44e21-60f0-431b-a939-255056f88972 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.922377605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d3839f4-2412-4c1c-9b94-f9cdb6e06b9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.922443862Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d3839f4-2412-4c1c-9b94-f9cdb6e06b9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.922478956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5d3839f4-2412-4c1c-9b94-f9cdb6e06b9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.959137391Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=084a3930-1621-4b67-a077-f865fb0def8a name=/runtime.v1.RuntimeService/Version
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.959235173Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=084a3930-1621-4b67-a077-f865fb0def8a name=/runtime.v1.RuntimeService/Version
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.960526560Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa8a298b-6ed5-43f7-803d-95b80f9be040 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.961000407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710193918960967181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa8a298b-6ed5-43f7-803d-95b80f9be040 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.961655637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4344abe-c013-4c80-9492-d71ba3efe55e name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.961790742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4344abe-c013-4c80-9492-d71ba3efe55e name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:51:58 old-k8s-version-239315 crio[648]: time="2024-03-11 21:51:58.961825170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b4344abe-c013-4c80-9492-d71ba3efe55e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar11 21:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053511] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047458] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.912778] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.895538] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.801193] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.918843] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.060085] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078339] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.210226] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.161588] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.299563] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +7.096564] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.072356] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.134589] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[Mar11 21:35] kauditd_printk_skb: 46 callbacks suppressed
	[Mar11 21:39] systemd-fstab-generator[4995]: Ignoring "noauto" option for root device
	[Mar11 21:40] systemd-fstab-generator[5275]: Ignoring "noauto" option for root device
	[  +0.073343] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:51:59 up 17 min,  0 users,  load average: 0.04, 0.04, 0.05
	Linux old-k8s-version-239315 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]: net.(*Dialer).DialContext(0xc0001c0780, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000d88d80, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0009ae2c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000d88d80, 0x24, 0x60, 0x7fac84f95b30, 0x118, ...)
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]: net/http.(*Transport).dial(0xc0002e7b80, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000d88d80, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]: net/http.(*Transport).dialConn(0xc0002e7b80, 0x4f7fe00, 0xc000120018, 0x0, 0xc000cb36e0, 0x5, 0xc000d88d80, 0x24, 0x0, 0xc000bcfd40, ...)
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]: net/http.(*Transport).dialConnFor(0xc0002e7b80, 0xc00089d130)
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]: created by net/http.(*Transport).queueForDial
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]: E0311 21:51:57.639509    6477 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.52:8443: connect: connection refused
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]: E0311 21:51:57.639582    6477 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-239315&limit=500&resourceVersion=0": dial tcp 192.168.72.52:8443: connect: connection refused
	Mar 11 21:51:57 old-k8s-version-239315 kubelet[6477]: E0311 21:51:57.639624    6477 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-239315&limit=500&resourceVersion=0": dial tcp 192.168.72.52:8443: connect: connection refused
	Mar 11 21:51:57 old-k8s-version-239315 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 11 21:51:57 old-k8s-version-239315 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 11 21:51:58 old-k8s-version-239315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Mar 11 21:51:58 old-k8s-version-239315 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 11 21:51:58 old-k8s-version-239315 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 11 21:51:58 old-k8s-version-239315 kubelet[6505]: I0311 21:51:58.394875    6505 server.go:416] Version: v1.20.0
	Mar 11 21:51:58 old-k8s-version-239315 kubelet[6505]: I0311 21:51:58.395388    6505 server.go:837] Client rotation is on, will bootstrap in background
	Mar 11 21:51:58 old-k8s-version-239315 kubelet[6505]: I0311 21:51:58.397960    6505 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 11 21:51:58 old-k8s-version-239315 kubelet[6505]: I0311 21:51:58.399456    6505 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Mar 11 21:51:58 old-k8s-version-239315 kubelet[6505]: W0311 21:51:58.399471    6505 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239315 -n old-k8s-version-239315
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239315 -n old-k8s-version-239315: exit status 2 (276.819023ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-239315" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.37s)
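The kubeadm output captured above repeatedly reports that the kubelet never became healthy, and minikube's own suggestion in that log is to inspect the kubelet unit and retry with the systemd cgroup driver. The sketch below only collects the commands already named in the output (the profile name, driver, runtime and Kubernetes version are the ones used in this run); it illustrates the suggested triage and is not a verified fix for this failure.

	# On the node (e.g. via `minikube ssh -p old-k8s-version-239315`): check the kubelet and any crashed control-plane containers
	sudo systemctl enable kubelet.service
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Retry the start with the kubelet cgroup-driver override suggested by minikube
	minikube start -p old-k8s-version-239315 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd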

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (383.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-324578 -n no-preload-324578
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-11 21:54:21.746751783 +0000 UTC m=+6269.918426092
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-324578 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-324578 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.51µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-324578 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
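The assertion above expects the dashboard-metrics-scraper deployment to carry the registry.k8s.io/echoserver:1.4 image substituted via --images=MetricsScraper=... (see the Audit table in the logs below), but the kubectl describe call hit its context deadline before any deployment info could be gathered. A minimal sketch of the equivalent manual check, assuming the apiserver were reachable (it was not in this run):

	# List the dashboard pods the test waits for, then print the scraper deployment's image
	kubectl --context no-preload-324578 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-324578 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'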
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-324578 -n no-preload-324578
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-324578 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-324578 logs -n 25: (1.379236617s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo find                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo crio                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-427678                                       | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-124446 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | disable-driver-mounts-124446                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:26 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-766430  | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-324578             | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-743937            | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239315        | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-766430       | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-324578                  | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:40 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-743937                 | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239315             | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC | 11 Mar 24 21:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:53 UTC | 11 Mar 24 21:53 UTC |
	| start   | -p newest-cni-649653 --memory=2200 --alsologtostderr   | newest-cni-649653            | jenkins | v1.32.0 | 11 Mar 24 21:53 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 21:53:29
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 21:53:29.936719   75727 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:53:29.936864   75727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:53:29.936877   75727 out.go:304] Setting ErrFile to fd 2...
	I0311 21:53:29.936883   75727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:53:29.937117   75727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:53:29.937767   75727 out.go:298] Setting JSON to false
	I0311 21:53:29.938704   75727 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9359,"bootTime":1710184651,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:53:29.938760   75727 start.go:139] virtualization: kvm guest
	I0311 21:53:29.941562   75727 out.go:177] * [newest-cni-649653] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:53:29.943397   75727 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:53:29.943339   75727 notify.go:220] Checking for updates...
	I0311 21:53:29.946238   75727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:53:29.947621   75727 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:53:29.948958   75727 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:53:29.950257   75727 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:53:29.951747   75727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:53:29.953649   75727 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:53:29.953801   75727 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:53:29.953953   75727 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:53:29.954064   75727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:53:29.992804   75727 out.go:177] * Using the kvm2 driver based on user configuration
	I0311 21:53:29.994030   75727 start.go:297] selected driver: kvm2
	I0311 21:53:29.994050   75727 start.go:901] validating driver "kvm2" against <nil>
	I0311 21:53:29.994061   75727 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:53:29.994759   75727 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:53:29.994826   75727 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:53:30.011256   75727 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:53:30.011317   75727 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0311 21:53:30.011348   75727 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0311 21:53:30.011558   75727 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0311 21:53:30.011586   75727 cni.go:84] Creating CNI manager for ""
	I0311 21:53:30.011593   75727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:53:30.011599   75727 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 21:53:30.011672   75727 start.go:340] cluster config:
	{Name:newest-cni-649653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:53:30.011763   75727 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:53:30.013376   75727 out.go:177] * Starting "newest-cni-649653" primary control-plane node in "newest-cni-649653" cluster
	I0311 21:53:30.014694   75727 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 21:53:30.014724   75727 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0311 21:53:30.014731   75727 cache.go:56] Caching tarball of preloaded images
	I0311 21:53:30.014827   75727 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:53:30.014840   75727 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0311 21:53:30.014948   75727 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/config.json ...
	I0311 21:53:30.014966   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/config.json: {Name:mk51ceabf4fcf900816338d68a850020f60e97dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:53:30.015094   75727 start.go:360] acquireMachinesLock for newest-cni-649653: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:53:30.015120   75727 start.go:364] duration metric: took 14.071µs to acquireMachinesLock for "newest-cni-649653"
	I0311 21:53:30.015136   75727 start.go:93] Provisioning new machine with config: &{Name:newest-cni-649653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:53:30.015233   75727 start.go:125] createHost starting for "" (driver="kvm2")
	I0311 21:53:30.016995   75727 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 21:53:30.017159   75727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:53:30.017212   75727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:53:30.031426   75727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0311 21:53:30.031855   75727 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:53:30.032477   75727 main.go:141] libmachine: Using API Version  1
	I0311 21:53:30.032501   75727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:53:30.032861   75727 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:53:30.033071   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetMachineName
	I0311 21:53:30.033240   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:30.033407   75727 start.go:159] libmachine.API.Create for "newest-cni-649653" (driver="kvm2")
	I0311 21:53:30.033436   75727 client.go:168] LocalClient.Create starting
	I0311 21:53:30.033472   75727 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem
	I0311 21:53:30.033506   75727 main.go:141] libmachine: Decoding PEM data...
	I0311 21:53:30.033530   75727 main.go:141] libmachine: Parsing certificate...
	I0311 21:53:30.033598   75727 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem
	I0311 21:53:30.033642   75727 main.go:141] libmachine: Decoding PEM data...
	I0311 21:53:30.033659   75727 main.go:141] libmachine: Parsing certificate...
	I0311 21:53:30.033682   75727 main.go:141] libmachine: Running pre-create checks...
	I0311 21:53:30.033707   75727 main.go:141] libmachine: (newest-cni-649653) Calling .PreCreateCheck
	I0311 21:53:30.034058   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetConfigRaw
	I0311 21:53:30.034703   75727 main.go:141] libmachine: Creating machine...
	I0311 21:53:30.034741   75727 main.go:141] libmachine: (newest-cni-649653) Calling .Create
	I0311 21:53:30.034960   75727 main.go:141] libmachine: (newest-cni-649653) Creating KVM machine...
	I0311 21:53:30.037113   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found existing default KVM network
	I0311 21:53:30.038315   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.038151   75749 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:8c:65:64} reservation:<nil>}
	I0311 21:53:30.039336   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.039197   75749 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:de:2b:c4} reservation:<nil>}
	I0311 21:53:30.040105   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.040023   75749 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:10:c8:e3} reservation:<nil>}
	I0311 21:53:30.041171   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.041092   75749 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289890}
	I0311 21:53:30.041200   75727 main.go:141] libmachine: (newest-cni-649653) DBG | created network xml: 
	I0311 21:53:30.041213   75727 main.go:141] libmachine: (newest-cni-649653) DBG | <network>
	I0311 21:53:30.041231   75727 main.go:141] libmachine: (newest-cni-649653) DBG |   <name>mk-newest-cni-649653</name>
	I0311 21:53:30.041255   75727 main.go:141] libmachine: (newest-cni-649653) DBG |   <dns enable='no'/>
	I0311 21:53:30.041279   75727 main.go:141] libmachine: (newest-cni-649653) DBG |   
	I0311 21:53:30.041290   75727 main.go:141] libmachine: (newest-cni-649653) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0311 21:53:30.041301   75727 main.go:141] libmachine: (newest-cni-649653) DBG |     <dhcp>
	I0311 21:53:30.041358   75727 main.go:141] libmachine: (newest-cni-649653) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0311 21:53:30.041382   75727 main.go:141] libmachine: (newest-cni-649653) DBG |     </dhcp>
	I0311 21:53:30.041405   75727 main.go:141] libmachine: (newest-cni-649653) DBG |   </ip>
	I0311 21:53:30.041415   75727 main.go:141] libmachine: (newest-cni-649653) DBG |   
	I0311 21:53:30.041423   75727 main.go:141] libmachine: (newest-cni-649653) DBG | </network>
	I0311 21:53:30.041433   75727 main.go:141] libmachine: (newest-cni-649653) DBG | 
	I0311 21:53:30.046411   75727 main.go:141] libmachine: (newest-cni-649653) DBG | trying to create private KVM network mk-newest-cni-649653 192.168.72.0/24...
	I0311 21:53:30.118483   75727 main.go:141] libmachine: (newest-cni-649653) DBG | private KVM network mk-newest-cni-649653 192.168.72.0/24 created
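
The generated network XML is then handed to libvirt, which defines the persistent network and starts it so DHCP is served on 192.168.72.0/24. A sketch of that step, assuming the libvirt.org/go/libvirt bindings (NetworkDefineXML/Create); minikube's own wrapper code may differ.

    package main

    import (
        "log"

        "libvirt.org/go/libvirt"
    )

    // Condensed version of the network XML printed in the log above.
    const networkXML = `<network>
      <name>mk-newest-cni-649653</name>
      <dns enable='no'/>
      <ip address='192.168.72.1' netmask='255.255.255.0'>
        <dhcp><range start='192.168.72.2' end='192.168.72.253'/></dhcp>
      </ip>
    </network>`

    func main() {
        // Connect to the URI recorded in the profile (KVMQemuURI:qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the persistent network, then start it so its DHCP range is live.
        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatal(err)
        }
        defer net.Free()
        if err := net.Create(); err != nil {
            log.Fatal(err)
        }
    }
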
	I0311 21:53:30.118580   75727 main.go:141] libmachine: (newest-cni-649653) Setting up store path in /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653 ...
	I0311 21:53:30.118670   75727 main.go:141] libmachine: (newest-cni-649653) Building disk image from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0311 21:53:30.118699   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.118631   75749 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:53:30.118796   75727 main.go:141] libmachine: (newest-cni-649653) Downloading /home/jenkins/minikube-integration/18358-11004/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0311 21:53:30.368677   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.368544   75749 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa...
	I0311 21:53:30.423818   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.423705   75749 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/newest-cni-649653.rawdisk...
	I0311 21:53:30.423863   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Writing magic tar header
	I0311 21:53:30.423883   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Writing SSH key tar header
	I0311 21:53:30.423949   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.423885   75749 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653 ...
	I0311 21:53:30.424051   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653
	I0311 21:53:30.424074   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines
	I0311 21:53:30.424089   75727 main.go:141] libmachine: (newest-cni-649653) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653 (perms=drwx------)
	I0311 21:53:30.424141   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:53:30.424168   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004
	I0311 21:53:30.424185   75727 main.go:141] libmachine: (newest-cni-649653) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines (perms=drwxr-xr-x)
	I0311 21:53:30.424201   75727 main.go:141] libmachine: (newest-cni-649653) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube (perms=drwxr-xr-x)
	I0311 21:53:30.424214   75727 main.go:141] libmachine: (newest-cni-649653) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004 (perms=drwxrwxr-x)
	I0311 21:53:30.424231   75727 main.go:141] libmachine: (newest-cni-649653) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0311 21:53:30.424242   75727 main.go:141] libmachine: (newest-cni-649653) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0311 21:53:30.424250   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0311 21:53:30.424261   75727 main.go:141] libmachine: (newest-cni-649653) Creating domain...
	I0311 21:53:30.424274   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home/jenkins
	I0311 21:53:30.424284   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home
	I0311 21:53:30.424294   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Skipping /home - not owner
	I0311 21:53:30.425336   75727 main.go:141] libmachine: (newest-cni-649653) define libvirt domain using xml: 
	I0311 21:53:30.425361   75727 main.go:141] libmachine: (newest-cni-649653) <domain type='kvm'>
	I0311 21:53:30.425388   75727 main.go:141] libmachine: (newest-cni-649653)   <name>newest-cni-649653</name>
	I0311 21:53:30.425422   75727 main.go:141] libmachine: (newest-cni-649653)   <memory unit='MiB'>2200</memory>
	I0311 21:53:30.425435   75727 main.go:141] libmachine: (newest-cni-649653)   <vcpu>2</vcpu>
	I0311 21:53:30.425445   75727 main.go:141] libmachine: (newest-cni-649653)   <features>
	I0311 21:53:30.425457   75727 main.go:141] libmachine: (newest-cni-649653)     <acpi/>
	I0311 21:53:30.425469   75727 main.go:141] libmachine: (newest-cni-649653)     <apic/>
	I0311 21:53:30.425477   75727 main.go:141] libmachine: (newest-cni-649653)     <pae/>
	I0311 21:53:30.425490   75727 main.go:141] libmachine: (newest-cni-649653)     
	I0311 21:53:30.425502   75727 main.go:141] libmachine: (newest-cni-649653)   </features>
	I0311 21:53:30.425511   75727 main.go:141] libmachine: (newest-cni-649653)   <cpu mode='host-passthrough'>
	I0311 21:53:30.425522   75727 main.go:141] libmachine: (newest-cni-649653)   
	I0311 21:53:30.425529   75727 main.go:141] libmachine: (newest-cni-649653)   </cpu>
	I0311 21:53:30.425541   75727 main.go:141] libmachine: (newest-cni-649653)   <os>
	I0311 21:53:30.425548   75727 main.go:141] libmachine: (newest-cni-649653)     <type>hvm</type>
	I0311 21:53:30.425578   75727 main.go:141] libmachine: (newest-cni-649653)     <boot dev='cdrom'/>
	I0311 21:53:30.425602   75727 main.go:141] libmachine: (newest-cni-649653)     <boot dev='hd'/>
	I0311 21:53:30.425612   75727 main.go:141] libmachine: (newest-cni-649653)     <bootmenu enable='no'/>
	I0311 21:53:30.425622   75727 main.go:141] libmachine: (newest-cni-649653)   </os>
	I0311 21:53:30.425629   75727 main.go:141] libmachine: (newest-cni-649653)   <devices>
	I0311 21:53:30.425639   75727 main.go:141] libmachine: (newest-cni-649653)     <disk type='file' device='cdrom'>
	I0311 21:53:30.425653   75727 main.go:141] libmachine: (newest-cni-649653)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/boot2docker.iso'/>
	I0311 21:53:30.425678   75727 main.go:141] libmachine: (newest-cni-649653)       <target dev='hdc' bus='scsi'/>
	I0311 21:53:30.425691   75727 main.go:141] libmachine: (newest-cni-649653)       <readonly/>
	I0311 21:53:30.425702   75727 main.go:141] libmachine: (newest-cni-649653)     </disk>
	I0311 21:53:30.425716   75727 main.go:141] libmachine: (newest-cni-649653)     <disk type='file' device='disk'>
	I0311 21:53:30.425729   75727 main.go:141] libmachine: (newest-cni-649653)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0311 21:53:30.425748   75727 main.go:141] libmachine: (newest-cni-649653)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/newest-cni-649653.rawdisk'/>
	I0311 21:53:30.425767   75727 main.go:141] libmachine: (newest-cni-649653)       <target dev='hda' bus='virtio'/>
	I0311 21:53:30.425778   75727 main.go:141] libmachine: (newest-cni-649653)     </disk>
	I0311 21:53:30.425787   75727 main.go:141] libmachine: (newest-cni-649653)     <interface type='network'>
	I0311 21:53:30.425796   75727 main.go:141] libmachine: (newest-cni-649653)       <source network='mk-newest-cni-649653'/>
	I0311 21:53:30.425803   75727 main.go:141] libmachine: (newest-cni-649653)       <model type='virtio'/>
	I0311 21:53:30.425813   75727 main.go:141] libmachine: (newest-cni-649653)     </interface>
	I0311 21:53:30.425821   75727 main.go:141] libmachine: (newest-cni-649653)     <interface type='network'>
	I0311 21:53:30.425834   75727 main.go:141] libmachine: (newest-cni-649653)       <source network='default'/>
	I0311 21:53:30.425849   75727 main.go:141] libmachine: (newest-cni-649653)       <model type='virtio'/>
	I0311 21:53:30.425861   75727 main.go:141] libmachine: (newest-cni-649653)     </interface>
	I0311 21:53:30.425876   75727 main.go:141] libmachine: (newest-cni-649653)     <serial type='pty'>
	I0311 21:53:30.425889   75727 main.go:141] libmachine: (newest-cni-649653)       <target port='0'/>
	I0311 21:53:30.425900   75727 main.go:141] libmachine: (newest-cni-649653)     </serial>
	I0311 21:53:30.425912   75727 main.go:141] libmachine: (newest-cni-649653)     <console type='pty'>
	I0311 21:53:30.425928   75727 main.go:141] libmachine: (newest-cni-649653)       <target type='serial' port='0'/>
	I0311 21:53:30.425940   75727 main.go:141] libmachine: (newest-cni-649653)     </console>
	I0311 21:53:30.425951   75727 main.go:141] libmachine: (newest-cni-649653)     <rng model='virtio'>
	I0311 21:53:30.425964   75727 main.go:141] libmachine: (newest-cni-649653)       <backend model='random'>/dev/random</backend>
	I0311 21:53:30.425971   75727 main.go:141] libmachine: (newest-cni-649653)     </rng>
	I0311 21:53:30.425981   75727 main.go:141] libmachine: (newest-cni-649653)     
	I0311 21:53:30.425997   75727 main.go:141] libmachine: (newest-cni-649653)     
	I0311 21:53:30.426009   75727 main.go:141] libmachine: (newest-cni-649653)   </devices>
	I0311 21:53:30.426019   75727 main.go:141] libmachine: (newest-cni-649653) </domain>
	I0311 21:53:30.426028   75727 main.go:141] libmachine: (newest-cni-649653) 
	I0311 21:53:30.429994   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:dd:5f:e6 in network default
	I0311 21:53:30.430524   75727 main.go:141] libmachine: (newest-cni-649653) Ensuring networks are active...
	I0311 21:53:30.430578   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:30.431155   75727 main.go:141] libmachine: (newest-cni-649653) Ensuring network default is active
	I0311 21:53:30.431449   75727 main.go:141] libmachine: (newest-cni-649653) Ensuring network mk-newest-cni-649653 is active
	I0311 21:53:30.432000   75727 main.go:141] libmachine: (newest-cni-649653) Getting domain xml...
	I0311 21:53:30.432810   75727 main.go:141] libmachine: (newest-cni-649653) Creating domain...
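
Defining and booting the domain follows the same define-then-create pattern. A fragment continuing the previous sketch, under the same libvirt binding assumption; defineAndStart is an illustrative name and conn is the *libvirt.Connect opened above.

    // defineAndStart defines the persistent KVM domain from the XML assembled
    // above and then boots it (the "Creating domain..." step in the log).
    func defineAndStart(conn *libvirt.Connect, domainXML string) error {
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return fmt.Errorf("define domain: %w", err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil {
            return fmt.Errorf("start domain: %w", err)
        }
        return nil
    }
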
	I0311 21:53:31.672132   75727 main.go:141] libmachine: (newest-cni-649653) Waiting to get IP...
	I0311 21:53:31.672912   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:31.673333   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:31.673354   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:31.673308   75749 retry.go:31] will retry after 191.593411ms: waiting for machine to come up
	I0311 21:53:31.866695   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:31.867244   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:31.867273   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:31.867190   75749 retry.go:31] will retry after 294.601067ms: waiting for machine to come up
	I0311 21:53:32.163613   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:32.164073   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:32.164096   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:32.164032   75749 retry.go:31] will retry after 483.852852ms: waiting for machine to come up
	I0311 21:53:32.649724   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:32.650154   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:32.650177   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:32.650109   75749 retry.go:31] will retry after 544.965754ms: waiting for machine to come up
	I0311 21:53:33.196825   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:33.197376   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:33.197404   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:33.197324   75749 retry.go:31] will retry after 569.467974ms: waiting for machine to come up
	I0311 21:53:33.768068   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:33.768616   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:33.768651   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:33.768568   75749 retry.go:31] will retry after 785.346216ms: waiting for machine to come up
	I0311 21:53:34.555442   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:34.555941   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:34.555970   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:34.555886   75749 retry.go:31] will retry after 1.185792657s: waiting for machine to come up
	I0311 21:53:35.745218   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:35.745759   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:35.745792   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:35.745709   75749 retry.go:31] will retry after 1.045736118s: waiting for machine to come up
	I0311 21:53:36.792624   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:36.793145   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:36.793175   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:36.793084   75749 retry.go:31] will retry after 1.492296791s: waiting for machine to come up
	I0311 21:53:38.286865   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:38.287447   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:38.287477   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:38.287401   75749 retry.go:31] will retry after 1.559903644s: waiting for machine to come up
	I0311 21:53:39.849344   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:39.849874   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:39.849901   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:39.849831   75749 retry.go:31] will retry after 1.851186773s: waiting for machine to come up
	I0311 21:53:41.703721   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:41.704286   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:41.704315   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:41.704256   75749 retry.go:31] will retry after 2.461306109s: waiting for machine to come up
	I0311 21:53:44.167385   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:44.167891   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:44.167914   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:44.167852   75749 retry.go:31] will retry after 3.635340302s: waiting for machine to come up
	I0311 21:53:47.805849   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:47.806340   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:47.806354   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:47.806314   75749 retry.go:31] will retry after 5.440107138s: waiting for machine to come up
	I0311 21:53:53.247922   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.248455   75727 main.go:141] libmachine: (newest-cni-649653) Found IP for machine: 192.168.72.200
	I0311 21:53:53.248475   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has current primary IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
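
The "will retry after ..." lines show the driver polling the network's DHCP leases for the new MAC address with a growing delay until a lease appears. A rough sketch of that wait loop, still assuming the libvirt Go bindings; the backoff constants and the waitForIP name are illustrative, while the Mac and IPaddr lease fields match the lease dump in the log.

    // waitForIP polls the libvirt network's DHCP leases until one matches mac or
    // the deadline passes, backing off between attempts like the retries above.
    func waitForIP(network *libvirt.Network, mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            leases, err := network.GetDHCPLeases()
            if err != nil {
                return "", err
            }
            for _, l := range leases {
                if l.Mac == mac {
                    return l.IPaddr, nil
                }
            }
            time.Sleep(delay)
            if delay < 5*time.Second { // grow the delay, with a cap
                delay *= 2
            }
        }
        return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
    }
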
	I0311 21:53:53.248481   75727 main.go:141] libmachine: (newest-cni-649653) Reserving static IP address...
	I0311 21:53:53.248956   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find host DHCP lease matching {name: "newest-cni-649653", mac: "52:54:00:de:e6:a4", ip: "192.168.72.200"} in network mk-newest-cni-649653
	I0311 21:53:53.324889   75727 main.go:141] libmachine: (newest-cni-649653) Reserved static IP address: 192.168.72.200
	I0311 21:53:53.324917   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Getting to WaitForSSH function...
	I0311 21:53:53.324925   75727 main.go:141] libmachine: (newest-cni-649653) Waiting for SSH to be available...
	I0311 21:53:53.327808   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.328227   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:minikube Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.328256   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.328380   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Using SSH client type: external
	I0311 21:53:53.328412   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa (-rw-------)
	I0311 21:53:53.328443   75727 main.go:141] libmachine: (newest-cni-649653) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:53:53.328465   75727 main.go:141] libmachine: (newest-cni-649653) DBG | About to run SSH command:
	I0311 21:53:53.328482   75727 main.go:141] libmachine: (newest-cni-649653) DBG | exit 0
	I0311 21:53:53.456938   75727 main.go:141] libmachine: (newest-cni-649653) DBG | SSH cmd err, output: <nil>: 
	I0311 21:53:53.457206   75727 main.go:141] libmachine: (newest-cni-649653) KVM machine creation complete!
	I0311 21:53:53.457570   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetConfigRaw
	I0311 21:53:53.458093   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:53.458325   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:53.458511   75727 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0311 21:53:53.458532   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetState
	I0311 21:53:53.459943   75727 main.go:141] libmachine: Detecting operating system of created instance...
	I0311 21:53:53.459956   75727 main.go:141] libmachine: Waiting for SSH to be available...
	I0311 21:53:53.459962   75727 main.go:141] libmachine: Getting to WaitForSSH function...
	I0311 21:53:53.459967   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:53.462138   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.462556   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.462585   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.462703   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:53.462872   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.463008   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.463150   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:53.463320   75727 main.go:141] libmachine: Using SSH client type: native
	I0311 21:53:53.463530   75727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:53:53.463545   75727 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0311 21:53:53.576884   75727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:53:53.576911   75727 main.go:141] libmachine: Detecting the provisioner...
	I0311 21:53:53.576922   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:53.580079   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.580486   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.580516   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.580698   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:53.580912   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.581089   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.581262   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:53.581428   75727 main.go:141] libmachine: Using SSH client type: native
	I0311 21:53:53.581637   75727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:53:53.581649   75727 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0311 21:53:53.698584   75727 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0311 21:53:53.698672   75727 main.go:141] libmachine: found compatible host: buildroot
	I0311 21:53:53.698688   75727 main.go:141] libmachine: Provisioning with buildroot...
	I0311 21:53:53.698699   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetMachineName
	I0311 21:53:53.698996   75727 buildroot.go:166] provisioning hostname "newest-cni-649653"
	I0311 21:53:53.699023   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetMachineName
	I0311 21:53:53.699210   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:53.702170   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.702560   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.702595   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.702763   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:53.702951   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.703147   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.703342   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:53.703519   75727 main.go:141] libmachine: Using SSH client type: native
	I0311 21:53:53.703662   75727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:53:53.703675   75727 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-649653 && echo "newest-cni-649653" | sudo tee /etc/hostname
	I0311 21:53:53.829333   75727 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-649653
	
	I0311 21:53:53.829359   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:53.832115   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.832481   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.832511   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.832692   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:53.832908   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.833085   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.833218   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:53.833377   75727 main.go:141] libmachine: Using SSH client type: native
	I0311 21:53:53.833577   75727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:53:53.833597   75727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-649653' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-649653/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-649653' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:53:53.951985   75727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:53:53.952013   75727 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:53:53.952058   75727 buildroot.go:174] setting up certificates
	I0311 21:53:53.952072   75727 provision.go:84] configureAuth start
	I0311 21:53:53.952089   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetMachineName
	I0311 21:53:53.952337   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetIP
	I0311 21:53:53.955265   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.955545   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.955577   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.955773   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:53.958412   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.958775   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.958796   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.958900   75727 provision.go:143] copyHostCerts
	I0311 21:53:53.958973   75727 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:53:53.958985   75727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:53:53.959075   75727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:53:53.959184   75727 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:53:53.959196   75727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:53:53.959235   75727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:53:53.959313   75727 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:53:53.959321   75727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:53:53.959346   75727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:53:53.959395   75727 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.newest-cni-649653 san=[127.0.0.1 192.168.72.200 localhost minikube newest-cni-649653]
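
configureAuth generates a server certificate signed by the local CA whose subject alternative names cover the loopback address, the machine IP, and the hostnames in the san=[...] entry above. A condensed sketch of that kind of SAN-bearing server cert generation with the standard crypto/x509 package; the package and helper names, key size, and serial-number choice are illustrative.

    package provision // illustrative package name

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert signs a server certificate for the given SANs with the CA,
    // analogous to the server.pem generated for the san=[...] list above.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
        cn string, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: cn},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames, // e.g. localhost, minikube, newest-cni-649653
            IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.72.200
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }
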
	I0311 21:53:54.150706   75727 provision.go:177] copyRemoteCerts
	I0311 21:53:54.150763   75727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:53:54.150784   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:54.153582   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.153935   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.153961   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.154165   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:54.154356   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.154536   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:54.154684   75727 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:53:54.241042   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:53:54.270677   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0311 21:53:54.300851   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 21:53:54.328705   75727 provision.go:87] duration metric: took 376.618763ms to configureAuth
	I0311 21:53:54.328730   75727 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:53:54.328939   75727 config.go:182] Loaded profile config "newest-cni-649653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:53:54.329038   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:54.331628   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.331985   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.332015   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.332187   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:54.332363   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.332500   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.332673   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:54.332880   75727 main.go:141] libmachine: Using SSH client type: native
	I0311 21:53:54.333072   75727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:53:54.333096   75727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:53:54.625255   75727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:53:54.625295   75727 main.go:141] libmachine: Checking connection to Docker...
	I0311 21:53:54.625308   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetURL
	I0311 21:53:54.626637   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Using libvirt version 6000000
	I0311 21:53:54.629212   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.629562   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.629594   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.629770   75727 main.go:141] libmachine: Docker is up and running!
	I0311 21:53:54.629789   75727 main.go:141] libmachine: Reticulating splines...
	I0311 21:53:54.629797   75727 client.go:171] duration metric: took 24.59635051s to LocalClient.Create
	I0311 21:53:54.629828   75727 start.go:167] duration metric: took 24.596423194s to libmachine.API.Create "newest-cni-649653"
	I0311 21:53:54.629840   75727 start.go:293] postStartSetup for "newest-cni-649653" (driver="kvm2")
	I0311 21:53:54.629856   75727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:53:54.629880   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:54.630114   75727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:53:54.630138   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:54.632260   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.632604   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.632624   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.632803   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:54.632969   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.633110   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:54.633241   75727 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:53:54.721677   75727 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:53:54.726644   75727 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:53:54.726670   75727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:53:54.726729   75727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:53:54.726821   75727 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:53:54.726943   75727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:53:54.738217   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:53:54.765778   75727 start.go:296] duration metric: took 135.928566ms for postStartSetup
	I0311 21:53:54.765822   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetConfigRaw
	I0311 21:53:54.766426   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetIP
	I0311 21:53:54.769252   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.769561   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.769587   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.769833   75727 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/config.json ...
	I0311 21:53:54.770013   75727 start.go:128] duration metric: took 24.754772148s to createHost
	I0311 21:53:54.770059   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:54.772273   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.772599   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.772627   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.772764   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:54.772947   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.773160   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.773337   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:54.773506   75727 main.go:141] libmachine: Using SSH client type: native
	I0311 21:53:54.773708   75727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:53:54.773723   75727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:53:54.889621   75727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710194034.860062119
	
	I0311 21:53:54.889646   75727 fix.go:216] guest clock: 1710194034.860062119
	I0311 21:53:54.889656   75727 fix.go:229] Guest: 2024-03-11 21:53:54.860062119 +0000 UTC Remote: 2024-03-11 21:53:54.770035432 +0000 UTC m=+24.881905345 (delta=90.026687ms)
	I0311 21:53:54.889700   75727 fix.go:200] guest clock delta is within tolerance: 90.026687ms
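
The clock check asks the guest for its time with date +%s.%N, parses the seconds.nanoseconds output, and accepts the result when the delta against the host clock is within tolerance. A small sketch of that comparison; the checkGuestClock name and whatever tolerance a caller passes are assumptions for illustration.

    package provision // illustrative

    import (
        "strconv"
        "strings"
        "time"
    )

    // checkGuestClock parses the guest's `date +%s.%N` output and reports the
    // absolute delta against the host clock and whether it is within tolerance.
    func checkGuestClock(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, false, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance, nil
    }

With the logged guest output 1710194034.860062119 and a tolerance on the order of a second or two, the ~90ms delta reported above would pass.
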
	I0311 21:53:54.889710   75727 start.go:83] releasing machines lock for "newest-cni-649653", held for 24.874581271s
	I0311 21:53:54.889732   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:54.890018   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetIP
	I0311 21:53:54.892706   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.893121   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.893149   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.893289   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:54.893749   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:54.893927   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:54.894008   75727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:53:54.894054   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:54.894284   75727 ssh_runner.go:195] Run: cat /version.json
	I0311 21:53:54.894329   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:54.896651   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.896981   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.897010   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.897028   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.897171   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:54.897331   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.897481   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:54.897509   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.897533   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.897616   75727 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:53:54.897778   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:54.897913   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.898114   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:54.898282   75727 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:53:54.978389   75727 ssh_runner.go:195] Run: systemctl --version
	I0311 21:53:54.999856   75727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:53:55.165475   75727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:53:55.172574   75727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:53:55.172635   75727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:53:55.190704   75727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:53:55.190725   75727 start.go:494] detecting cgroup driver to use...
	I0311 21:53:55.190773   75727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:53:55.211452   75727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:53:55.226259   75727 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:53:55.226314   75727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:53:55.242009   75727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:53:55.257562   75727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:53:55.383455   75727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:53:55.554383   75727 docker.go:233] disabling docker service ...
	I0311 21:53:55.554456   75727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:53:55.569224   75727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:53:55.584403   75727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:53:55.715319   75727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:53:55.852371   75727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:53:55.869679   75727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:53:55.893816   75727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:53:55.893883   75727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:53:55.905816   75727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:53:55.905867   75727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:53:55.917741   75727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:53:55.929470   75727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
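The three sed edits above pin the pause image and switch CRI-O to the cgroupfs cgroup manager with conmon placed in the pod cgroup. A minimal sketch for confirming the result on the guest (assuming the same drop-in path used above):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"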
	I0311 21:53:55.941689   75727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:53:55.954149   75727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:53:55.965791   75727 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:53:55.965847   75727 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:53:55.980599   75727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
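The modprobe and echo above provide the bridge-netfilter and IPv4 forwarding settings the CNI bridge network needs. A quick check, assuming a shell on the same guest (bridge-nf-call-iptables usually defaults to 1 once br_netfilter is loaded; only ip_forward is set explicitly here):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward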
	I0311 21:53:55.991382   75727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:53:56.116463   75727 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:53:56.274461   75727 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:53:56.274546   75727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:53:56.280509   75727 start.go:562] Will wait 60s for crictl version
	I0311 21:53:56.280587   75727 ssh_runner.go:195] Run: which crictl
	I0311 21:53:56.285398   75727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:53:56.326218   75727 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:53:56.326310   75727 ssh_runner.go:195] Run: crio --version
	I0311 21:53:56.361133   75727 ssh_runner.go:195] Run: crio --version
	I0311 21:53:56.396638   75727 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0311 21:53:56.397886   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetIP
	I0311 21:53:56.400681   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:56.401093   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:56.401122   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:56.401361   75727 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0311 21:53:56.406263   75727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:53:56.422239   75727 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0311 21:53:56.423576   75727 kubeadm.go:877] updating cluster {Name:newest-cni-649653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:53:56.423715   75727 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 21:53:56.423796   75727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:53:56.464027   75727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0311 21:53:56.464086   75727 ssh_runner.go:195] Run: which lz4
	I0311 21:53:56.469092   75727 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:53:56.474417   75727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:53:56.474448   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0311 21:53:58.184795   75727 crio.go:444] duration metric: took 1.715725311s to copy over tarball
	I0311 21:53:58.184855   75727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:54:00.846200   75727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.661323281s)
	I0311 21:54:00.846224   75727 crio.go:451] duration metric: took 2.661404275s to extract the tarball
	I0311 21:54:00.846231   75727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:54:00.889345   75727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:54:00.939615   75727 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:54:00.939644   75727 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:54:00.939654   75727 kubeadm.go:928] updating node { 192.168.72.200 8443 v1.29.0-rc.2 crio true true} ...
	I0311 21:54:00.939800   75727 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-649653 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
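The unit text above is what gets written a few lines below as the 10-kubeadm.conf drop-in; the empty ExecStart= line clears any packaged command before the minikube-specific ExecStart is applied. To see what systemd actually merges, assuming SSH access to the node, something like:

    systemctl cat kubelet
    # prints kubelet.service plus its drop-ins, including
    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf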
	I0311 21:54:00.939889   75727 ssh_runner.go:195] Run: crio config
	I0311 21:54:01.002487   75727 cni.go:84] Creating CNI manager for ""
	I0311 21:54:01.002513   75727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:54:01.002528   75727 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0311 21:54:01.002554   75727 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.200 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-649653 NodeName:newest-cni-649653 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.72.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:54:01.002719   75727 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-649653"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
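The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml further down and then passed to kubeadm init. A non-destructive way to sanity-check such a config outside the test flow (a sketch, not something minikube runs here) is kubeadm's dry-run mode:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
    # validates the config and renders manifests without modifying the node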
	
	I0311 21:54:01.002790   75727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0311 21:54:01.014123   75727 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:54:01.014181   75727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:54:01.025444   75727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0311 21:54:01.044878   75727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0311 21:54:01.064168   75727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0311 21:54:01.085853   75727 ssh_runner.go:195] Run: grep 192.168.72.200	control-plane.minikube.internal$ /etc/hosts
	I0311 21:54:01.090627   75727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:54:01.107128   75727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:54:01.244930   75727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:54:01.276382   75727 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653 for IP: 192.168.72.200
	I0311 21:54:01.276413   75727 certs.go:194] generating shared ca certs ...
	I0311 21:54:01.276434   75727 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.276630   75727 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:54:01.276698   75727 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:54:01.276712   75727 certs.go:256] generating profile certs ...
	I0311 21:54:01.276807   75727 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/client.key
	I0311 21:54:01.276828   75727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/client.crt with IP's: []
	I0311 21:54:01.627941   75727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/client.crt ...
	I0311 21:54:01.627971   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/client.crt: {Name:mkf48f6f5efea8f700b7f0c847dacf2dd1d2e015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.628143   75727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/client.key ...
	I0311 21:54:01.628158   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/client.key: {Name:mk50dccdde388046496defc6928981b552d846f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.628265   75727 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key.da5ea2e9
	I0311 21:54:01.628284   75727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.crt.da5ea2e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.200]
	I0311 21:54:01.828611   75727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.crt.da5ea2e9 ...
	I0311 21:54:01.828638   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.crt.da5ea2e9: {Name:mk0347b1ae25febf5b63847a7ddfd2a05199f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.828798   75727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key.da5ea2e9 ...
	I0311 21:54:01.828812   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key.da5ea2e9: {Name:mkde9c302a709830ac1b06e65a9cb8dbe9e198a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.828878   75727 certs.go:381] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.crt.da5ea2e9 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.crt
	I0311 21:54:01.828959   75727 certs.go:385] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key.da5ea2e9 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key
	I0311 21:54:01.829022   75727 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.key
	I0311 21:54:01.829037   75727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.crt with IP's: []
	I0311 21:54:01.931462   75727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.crt ...
	I0311 21:54:01.931490   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.crt: {Name:mkae49738e3b70f7be593c3b9fce3c08854baf14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.931655   75727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.key ...
	I0311 21:54:01.931674   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.key: {Name:mk96bdf422ac9f796c11a3a971f7b0b8e448149b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.931885   75727 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:54:01.931937   75727 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:54:01.931951   75727 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:54:01.931990   75727 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:54:01.932021   75727 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:54:01.932054   75727 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:54:01.932102   75727 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:54:01.932701   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:54:01.962480   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:54:01.990778   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:54:02.019990   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:54:02.046427   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0311 21:54:02.074892   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 21:54:02.104767   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:54:02.134391   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:54:02.166867   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:54:02.194514   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:54:02.225355   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:54:02.253290   75727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:54:02.275178   75727 ssh_runner.go:195] Run: openssl version
	I0311 21:54:02.281835   75727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:54:02.296517   75727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:54:02.301671   75727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:54:02.301728   75727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:54:02.308307   75727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:54:02.321766   75727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:54:02.334830   75727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:54:02.339787   75727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:54:02.339838   75727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:54:02.346037   75727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:54:02.361915   75727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:54:02.375852   75727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:54:02.381433   75727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:54:02.381483   75727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:54:02.388334   75727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
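Each CA above is linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is what the openssl x509 -hash calls compute. A standalone sketch of the same idea, using the minikubeCA path from the log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${h}.0
    # ${h} is the 8-hex-digit hash OpenSSL uses to look up trusted CAs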
	I0311 21:54:02.402826   75727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:54:02.407714   75727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 21:54:02.407779   75727 kubeadm.go:391] StartCluster: {Name:newest-cni-649653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:54:02.407875   75727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:54:02.407945   75727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:54:02.455491   75727 cri.go:89] found id: ""
	I0311 21:54:02.455590   75727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0311 21:54:02.468270   75727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:54:02.480203   75727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:54:02.492223   75727 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:54:02.492241   75727 kubeadm.go:156] found existing configuration files:
	
	I0311 21:54:02.492296   75727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:54:02.503710   75727 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:54:02.503784   75727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:54:02.516510   75727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:54:02.527862   75727 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:54:02.527924   75727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:54:02.539625   75727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:54:02.550194   75727 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:54:02.550245   75727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:54:02.561657   75727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:54:02.575026   75727 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:54:02.575092   75727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:54:02.587271   75727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:54:02.722108   75727 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0311 21:54:02.722162   75727 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:54:02.873257   75727 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:54:02.873404   75727 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:54:02.873511   75727 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:54:03.156608   75727 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:54:03.193733   75727 out.go:204]   - Generating certificates and keys ...
	I0311 21:54:03.193865   75727 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:54:03.193960   75727 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:54:03.568463   75727 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0311 21:54:03.760913   75727 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0311 21:54:03.919423   75727 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0311 21:54:04.078749   75727 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0311 21:54:04.356013   75727 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0311 21:54:04.356582   75727 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-649653] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0311 21:54:04.471562   75727 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0311 21:54:04.471775   75727 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-649653] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0311 21:54:04.593524   75727 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0311 21:54:04.731682   75727 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0311 21:54:04.801313   75727 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0311 21:54:04.801592   75727 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:54:05.357683   75727 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:54:05.611334   75727 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0311 21:54:05.664186   75727 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:54:05.810194   75727 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:54:06.127505   75727 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:54:06.128316   75727 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:54:06.131379   75727 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:54:06.133066   75727 out.go:204]   - Booting up control plane ...
	I0311 21:54:06.133183   75727 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:54:06.133304   75727 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:54:06.133375   75727 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:54:06.154341   75727 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:54:06.154459   75727 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:54:06.154521   75727 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:54:06.306315   75727 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:54:12.807708   75727 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503824 seconds
	I0311 21:54:12.827358   75727 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:54:12.852154   75727 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:54:13.391122   75727 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:54:13.391378   75727 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-649653 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:54:13.905871   75727 kubeadm.go:309] [bootstrap-token] Using token: tpk8d0.p1x67f7vtd5pwmvt
	I0311 21:54:13.907328   75727 out.go:204]   - Configuring RBAC rules ...
	I0311 21:54:13.907485   75727 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:54:13.916426   75727 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:54:13.928858   75727 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:54:13.933891   75727 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:54:13.938157   75727 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:54:13.948917   75727 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:54:13.965910   75727 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:54:14.274241   75727 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:54:14.341311   75727 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:54:14.344492   75727 kubeadm.go:309] 
	I0311 21:54:14.344590   75727 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:54:14.344613   75727 kubeadm.go:309] 
	I0311 21:54:14.344710   75727 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:54:14.344722   75727 kubeadm.go:309] 
	I0311 21:54:14.344780   75727 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:54:14.344899   75727 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:54:14.344983   75727 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:54:14.344997   75727 kubeadm.go:309] 
	I0311 21:54:14.345096   75727 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:54:14.345115   75727 kubeadm.go:309] 
	I0311 21:54:14.345187   75727 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:54:14.345201   75727 kubeadm.go:309] 
	I0311 21:54:14.345260   75727 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:54:14.345384   75727 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:54:14.345486   75727 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:54:14.345499   75727 kubeadm.go:309] 
	I0311 21:54:14.345616   75727 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:54:14.345720   75727 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:54:14.345730   75727 kubeadm.go:309] 
	I0311 21:54:14.345847   75727 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token tpk8d0.p1x67f7vtd5pwmvt \
	I0311 21:54:14.345980   75727 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:54:14.346011   75727 kubeadm.go:309] 	--control-plane 
	I0311 21:54:14.346021   75727 kubeadm.go:309] 
	I0311 21:54:14.346127   75727 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:54:14.346155   75727 kubeadm.go:309] 
	I0311 21:54:14.346246   75727 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token tpk8d0.p1x67f7vtd5pwmvt \
	I0311 21:54:14.346363   75727 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:54:14.346489   75727 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:54:14.346515   75727 cni.go:84] Creating CNI manager for ""
	I0311 21:54:14.346524   75727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:54:14.348328   75727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:54:14.349881   75727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:54:14.382261   75727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
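The 457-byte file written above is the bridge CNI configuration; its contents are not echoed in the log. To inspect it on the guest (paths as used above; /opt/cni/bin is the conventional plugin directory, assumed here):

    sudo cat /etc/cni/net.d/1-k8s.conflist
    ls /opt/cni/bin/    # bridge and host-local plugins referenced by the conflist normally live here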
	I0311 21:54:14.423980   75727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:54:14.424072   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:14.424087   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-649653 minikube.k8s.io/updated_at=2024_03_11T21_54_14_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=newest-cni-649653 minikube.k8s.io/primary=true
	I0311 21:54:14.503304   75727 ops.go:34] apiserver oom_adj: -16
	I0311 21:54:14.796782   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:15.297203   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:15.797485   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:16.297560   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:16.797305   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:17.297791   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:17.797499   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:18.297505   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:18.797458   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:19.297502   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:19.796820   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.515982336Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74cac54f-566c-406c-864d-80d270810631 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.518467409Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c6b529b-7a16-4be1-b7bc-95ddef068046 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.519315147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194062519287836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c6b529b-7a16-4be1-b7bc-95ddef068046 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.520125009Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c77a8be-67c3-4de2-a315-dc7fde90f69e name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.520208021Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c77a8be-67c3-4de2-a315-dc7fde90f69e name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.520407868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710192900024670344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0601a54c86517ac45bde833e5034231ad39b0a781d319e3c7a96461a91a5407a,PodSandboxId:00f9c2c2c24a2d9a25455389cd7c53b91abe2677788341170c4e909e31c01592,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710192877991276879,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0775042-3ac4-4743-a85a-3df42267a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 82395f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371,PodSandboxId:17a6c558fdd05884e68588b4227687f72cdab56eaa9b47177121cc35d6f3e2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710192876858908409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-s6lsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f5daf9-7d52-475d-9341-09024dc7c8e7,},Annotations:map[string]string{io.kubernetes.container.hash: 26f79f4f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db,PodSandboxId:6c311e64040daf112fa8999c99f3eaf422700c1b3814a57dd5cefb9dc1dc65de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710192869284267856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmz4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81ec7a47-6b52-4133-bd
c5-4dea57847900,},Annotations:map[string]string{io.kubernetes.container.hash: ff981d25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710192869223965401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a
6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a,PodSandboxId:ab96f9a415c1d01675fe726ae2e6c8a87e3c75918be79e00f89da171121192e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710192864589640678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c01883a8f967cb75fc9de8ead6fb204,},Annotations:map[string]string{io.kuber
netes.container.hash: d7d87a8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0,PodSandboxId:fc676152297873cfd00ddd04200a063d29b282a0422dc556611400639a99b119,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710192864592952670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdcc8e32375fbc3cf5ca65346b1457dd,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c,PodSandboxId:9660842d3b13ad4a8355982e8c4d811b1b5506a638f011bd6a00609a29dd3377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710192864521508756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07206bcb9cdf44cefceebaa6e0ed3a3,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902,PodSandboxId:36c029e61ceaa7ebfe4083e2f05f06c74b54b4f9481478d5a9ba0e5296e60270,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710192864494375201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816bd9883830036b8fe6a241a004950c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 401348b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c77a8be-67c3-4de2-a315-dc7fde90f69e name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.570057008Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a194819-23af-4b54-80e5-21312ae4c7c7 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.570200668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a194819-23af-4b54-80e5-21312ae4c7c7 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.571354097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53571072-992d-49cd-ac61-aefebcd1029d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.572022995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194062571992554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53571072-992d-49cd-ac61-aefebcd1029d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.572884307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59d7e888-02da-43d0-bb2c-f128ef0548f3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.572967903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59d7e888-02da-43d0-bb2c-f128ef0548f3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.573179736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710192900024670344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0601a54c86517ac45bde833e5034231ad39b0a781d319e3c7a96461a91a5407a,PodSandboxId:00f9c2c2c24a2d9a25455389cd7c53b91abe2677788341170c4e909e31c01592,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710192877991276879,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0775042-3ac4-4743-a85a-3df42267a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 82395f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371,PodSandboxId:17a6c558fdd05884e68588b4227687f72cdab56eaa9b47177121cc35d6f3e2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710192876858908409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-s6lsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f5daf9-7d52-475d-9341-09024dc7c8e7,},Annotations:map[string]string{io.kubernetes.container.hash: 26f79f4f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db,PodSandboxId:6c311e64040daf112fa8999c99f3eaf422700c1b3814a57dd5cefb9dc1dc65de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710192869284267856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmz4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81ec7a47-6b52-4133-bd
c5-4dea57847900,},Annotations:map[string]string{io.kubernetes.container.hash: ff981d25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710192869223965401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a
6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a,PodSandboxId:ab96f9a415c1d01675fe726ae2e6c8a87e3c75918be79e00f89da171121192e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710192864589640678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c01883a8f967cb75fc9de8ead6fb204,},Annotations:map[string]string{io.kuber
netes.container.hash: d7d87a8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0,PodSandboxId:fc676152297873cfd00ddd04200a063d29b282a0422dc556611400639a99b119,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710192864592952670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdcc8e32375fbc3cf5ca65346b1457dd,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c,PodSandboxId:9660842d3b13ad4a8355982e8c4d811b1b5506a638f011bd6a00609a29dd3377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710192864521508756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07206bcb9cdf44cefceebaa6e0ed3a3,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902,PodSandboxId:36c029e61ceaa7ebfe4083e2f05f06c74b54b4f9481478d5a9ba0e5296e60270,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710192864494375201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816bd9883830036b8fe6a241a004950c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 401348b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59d7e888-02da-43d0-bb2c-f128ef0548f3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.619015671Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2178cedd-7789-4c75-9e56-02fe7ac40a95 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.619111935Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2178cedd-7789-4c75-9e56-02fe7ac40a95 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.620666285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04823eeb-e1e1-4219-94c4-2f6969d9efed name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.621527903Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194062621502388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04823eeb-e1e1-4219-94c4-2f6969d9efed name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.622161075Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d76d366b-d0c5-4ee8-ab7c-bfa90ba3c791 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.622212206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d76d366b-d0c5-4ee8-ab7c-bfa90ba3c791 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.622408406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710192900024670344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0601a54c86517ac45bde833e5034231ad39b0a781d319e3c7a96461a91a5407a,PodSandboxId:00f9c2c2c24a2d9a25455389cd7c53b91abe2677788341170c4e909e31c01592,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710192877991276879,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0775042-3ac4-4743-a85a-3df42267a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 82395f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371,PodSandboxId:17a6c558fdd05884e68588b4227687f72cdab56eaa9b47177121cc35d6f3e2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710192876858908409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-s6lsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f5daf9-7d52-475d-9341-09024dc7c8e7,},Annotations:map[string]string{io.kubernetes.container.hash: 26f79f4f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db,PodSandboxId:6c311e64040daf112fa8999c99f3eaf422700c1b3814a57dd5cefb9dc1dc65de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710192869284267856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmz4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81ec7a47-6b52-4133-bd
c5-4dea57847900,},Annotations:map[string]string{io.kubernetes.container.hash: ff981d25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710192869223965401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a
6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a,PodSandboxId:ab96f9a415c1d01675fe726ae2e6c8a87e3c75918be79e00f89da171121192e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710192864589640678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c01883a8f967cb75fc9de8ead6fb204,},Annotations:map[string]string{io.kuber
netes.container.hash: d7d87a8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0,PodSandboxId:fc676152297873cfd00ddd04200a063d29b282a0422dc556611400639a99b119,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710192864592952670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdcc8e32375fbc3cf5ca65346b1457dd,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c,PodSandboxId:9660842d3b13ad4a8355982e8c4d811b1b5506a638f011bd6a00609a29dd3377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710192864521508756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07206bcb9cdf44cefceebaa6e0ed3a3,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902,PodSandboxId:36c029e61ceaa7ebfe4083e2f05f06c74b54b4f9481478d5a9ba0e5296e60270,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710192864494375201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816bd9883830036b8fe6a241a004950c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 401348b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d76d366b-d0c5-4ee8-ab7c-bfa90ba3c791 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.625141202Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=793d84db-3df3-4b0c-9ba9-4da7a2dd284f name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.625470281Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:00f9c2c2c24a2d9a25455389cd7c53b91abe2677788341170c4e909e31c01592,Metadata:&PodSandboxMetadata{Name:busybox,Uid:f0775042-3ac4-4743-a85a-3df42267a6e6,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710192876637037365,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0775042-3ac4-4743-a85a-3df42267a6e6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-11T21:34:28.696997481Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:17a6c558fdd05884e68588b4227687f72cdab56eaa9b47177121cc35d6f3e2a3,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-s6lsb,Uid:b4f5daf9-7d52-475d-9341-09024dc7c8e7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17101928766073397
24,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-s6lsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f5daf9-7d52-475d-9341-09024dc7c8e7,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-11T21:34:28.697012223Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0f65bf7004af536123ea4cf0053082dfe8e5417cace16046a0f3dab142eda221,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-nv4gd,Uid:ae810c51-28bd-4c79-93ba-033f4767ba89,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710192873807857330,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-nv4gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae810c51-28bd-4c79-93ba-033f4767ba89,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-11T21:34:28.6
97025889Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c311e64040daf112fa8999c99f3eaf422700c1b3814a57dd5cefb9dc1dc65de,Metadata:&PodSandboxMetadata{Name:kube-proxy-rmz4b,Uid:81ec7a47-6b52-4133-bdc5-4dea57847900,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710192869021908035,Labels:map[string]string{controller-revision-hash: 79c5f556d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rmz4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81ec7a47-6b52-4133-bdc5-4dea57847900,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-11T21:34:28.697023918Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:82fcc747-2962-4203-8ce5-25c2bb408a6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710192869020459247,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a6d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2024-03-11T21:34:28.697019834Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fc676152297873cfd00ddd04200a063d29b282a0422dc556611400639a99b119,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-324578,Uid:fdcc8e32375fbc3cf5ca65346b1457dd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710192864247520170,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdcc8e32375fbc3cf5ca65346b1457dd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fdcc8e32375fbc3cf5ca65346b1457dd,kubernetes.io/config.seen: 2024-03-11T21:34:23.696104642Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9660842d3b13ad4a8355982e8c4d811b1b5506a638f011bd6a00609a29dd3377,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-324578,Uid:c07206bcb9cdf44cefceebaa6e0ed3a3,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710192864241865697,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07206bcb9cdf44cefceebaa6e0ed3a3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c07206bcb9cdf44cefceebaa6e0ed3a3,kubernetes.io/config.seen: 2024-03-11T21:34:23.696103463Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ab96f9a415c1d01675fe726ae2e6c8a87e3c75918be79e00f89da171121192e6,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-324578,Uid:9c01883a8f967cb75fc9de8ead6fb204,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710192864223668110,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c01883a8f967cb75
fc9de8ead6fb204,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.36:2379,kubernetes.io/config.hash: 9c01883a8f967cb75fc9de8ead6fb204,kubernetes.io/config.seen: 2024-03-11T21:34:23.749650027Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:36c029e61ceaa7ebfe4083e2f05f06c74b54b4f9481478d5a9ba0e5296e60270,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-324578,Uid:816bd9883830036b8fe6a241a004950c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710192864223247343,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816bd9883830036b8fe6a241a004950c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.36:8443,kubernetes.io/config.hash: 816bd9883830036b8fe6a241a004950c,kube
rnetes.io/config.seen: 2024-03-11T21:34:23.696099385Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=793d84db-3df3-4b0c-9ba9-4da7a2dd284f name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.626444461Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3963956e-7592-468d-bbd9-635479ab9935 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.626497058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3963956e-7592-468d-bbd9-635479ab9935 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:22 no-preload-324578 crio[688]: time="2024-03-11 21:54:22.626823655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710192900024670344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0601a54c86517ac45bde833e5034231ad39b0a781d319e3c7a96461a91a5407a,PodSandboxId:00f9c2c2c24a2d9a25455389cd7c53b91abe2677788341170c4e909e31c01592,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710192877991276879,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f0775042-3ac4-4743-a85a-3df42267a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 82395f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371,PodSandboxId:17a6c558fdd05884e68588b4227687f72cdab56eaa9b47177121cc35d6f3e2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710192876858908409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-s6lsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4f5daf9-7d52-475d-9341-09024dc7c8e7,},Annotations:map[string]string{io.kubernetes.container.hash: 26f79f4f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db,PodSandboxId:6c311e64040daf112fa8999c99f3eaf422700c1b3814a57dd5cefb9dc1dc65de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710192869284267856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmz4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81ec7a47-6b52-4133-bd
c5-4dea57847900,},Annotations:map[string]string{io.kubernetes.container.hash: ff981d25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001,PodSandboxId:98e0753deae414f93734b80ff1636b242772441ebf66cfa5befca2878c689cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710192869223965401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82fcc747-2962-4203-8ce5-25c2bb408a
6d,},Annotations:map[string]string{io.kubernetes.container.hash: a5594de6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a,PodSandboxId:ab96f9a415c1d01675fe726ae2e6c8a87e3c75918be79e00f89da171121192e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710192864589640678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c01883a8f967cb75fc9de8ead6fb204,},Annotations:map[string]string{io.kuber
netes.container.hash: d7d87a8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0,PodSandboxId:fc676152297873cfd00ddd04200a063d29b282a0422dc556611400639a99b119,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710192864592952670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdcc8e32375fbc3cf5ca65346b1457dd,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c,PodSandboxId:9660842d3b13ad4a8355982e8c4d811b1b5506a638f011bd6a00609a29dd3377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710192864521508756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c07206bcb9cdf44cefceebaa6e0ed3a3,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902,PodSandboxId:36c029e61ceaa7ebfe4083e2f05f06c74b54b4f9481478d5a9ba0e5296e60270,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710192864494375201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-324578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816bd9883830036b8fe6a241a004950c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 401348b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3963956e-7592-468d-bbd9-635479ab9935 name=/runtime.v1.RuntimeService/ListContainers
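	The polling captured above is the kubelet repeatedly exercising the CRI RuntimeService and ImageService endpoints of CRI-O. As a hedged, illustrative sketch only (assuming crictl is present inside the minikube VM and CRI-O is on its default socket, which is how these jobs normally run), the same four RPCs can be issued by hand to reproduce the listings that follow; the profile name is taken from the log lines above:

	  $ minikube -p no-preload-324578 ssh
	  $ sudo crictl version        # RuntimeService/Version
	  $ sudo crictl imagefsinfo    # ImageService/ImageFsInfo
	  $ sudo crictl ps -a          # RuntimeService/ListContainers with no filter
	  $ sudo crictl pods           # RuntimeService/ListPodSandbox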
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	21d8b522dbe03       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       3                   98e0753deae41       storage-provisioner
	0601a54c86517       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   00f9c2c2c24a2       busybox
	47a3cc73ba85a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   17a6c558fdd05       coredns-76f75df574-s6lsb
	c4b1f09c4c07d       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      19 minutes ago      Running             kube-proxy                1                   6c311e64040da       kube-proxy-rmz4b
	8c5aec8c42b97       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       2                   98e0753deae41       storage-provisioner
	afcbb2dc1ded0       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      19 minutes ago      Running             kube-scheduler            1                   fc67615229787       kube-scheduler-no-preload-324578
	c0cb4bf3e770c       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      19 minutes ago      Running             etcd                      1                   ab96f9a415c1d       etcd-no-preload-324578
	349dc13986ab3       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      19 minutes ago      Running             kube-controller-manager   1                   9660842d3b13a       kube-controller-manager-no-preload-324578
	1ed4ff4bec8a1       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      19 minutes ago      Running             kube-apiserver            1                   36c029e61ceaa       kube-apiserver-no-preload-324578
	
	
	==> coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36589 - 27227 "HINFO IN 7298603871246463141.566043023039465393. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006403542s
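	The single HINFO lookup above looks like CoreDNS's startup loop-detection probe; the quick NXDOMAIN answer indicates the query did not loop back. A minimal, hypothetical way to pull the same log stream directly (assuming crictl access on the node; the container ID prefix and pod name are taken from the sections above):

	  $ sudo crictl logs 47a3cc73ba85a                          # coredns container, ID prefix from the status table
	  $ kubectl -n kube-system logs coredns-76f75df574-s6lsb    # same logs via the API server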
	
	
	==> describe nodes <==
	Name:               no-preload-324578
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-324578
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=no-preload-324578
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T21_25_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 21:25:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-324578
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:54:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 21:50:19 +0000   Mon, 11 Mar 2024 21:25:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 21:50:19 +0000   Mon, 11 Mar 2024 21:25:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 21:50:19 +0000   Mon, 11 Mar 2024 21:25:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 21:50:19 +0000   Mon, 11 Mar 2024 21:34:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    no-preload-324578
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb451091906a45f09624844ec4bffca5
	  System UUID:                eb451091-906a-45f0-9624-844ec4bffca5
	  Boot ID:                    4581dfec-8b49-4d5c-ae2b-764bbaa7967c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-76f75df574-s6lsb                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-324578                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-324578             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-324578    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-rmz4b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-324578             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-nv4gd              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node no-preload-324578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node no-preload-324578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node no-preload-324578 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-324578 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-324578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-324578 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node no-preload-324578 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-324578 event: Registered Node no-preload-324578 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-324578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-324578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-324578 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-324578 event: Registered Node no-preload-324578 in Controller
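	The node description above shows no-preload-324578 Ready, untainted, and with modest aggregate requests (850m CPU, 370Mi memory), so the node itself does not appear resource-starved. A hedged way to regenerate this view against the same profile (assuming the kubeconfig context name matches the profile name, as minikube normally configures it):

	  $ kubectl --context no-preload-324578 describe node no-preload-324578
	  $ kubectl --context no-preload-324578 get pods -A -o wide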
	
	
	==> dmesg <==
	[Mar11 21:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053580] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045063] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.537957] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.354207] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.698590] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar11 21:34] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.056190] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069135] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.216244] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.115661] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.252298] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[ +17.048095] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.058759] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.751935] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +5.632867] kauditd_printk_skb: 100 callbacks suppressed
	[  +4.553775] systemd-fstab-generator[1925]: Ignoring "noauto" option for root device
	[  +2.955658] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.901502] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] <==
	{"level":"info","ts":"2024-03-11T21:35:17.475199Z","caller":"traceutil/trace.go:171","msg":"trace[1116546657] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-nv4gd; range_end:; response_count:1; response_revision:654; }","duration":"468.394037ms","start":"2024-03-11T21:35:17.006795Z","end":"2024-03-11T21:35:17.475189Z","steps":["trace[1116546657] 'agreement among raft nodes before linearized reading'  (duration: 468.311485ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T21:35:17.475238Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T21:35:17.006781Z","time spent":"468.44842ms","remote":"127.0.0.1:51246","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4258,"request content":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-nv4gd\" "}
	{"level":"warn","ts":"2024-03-11T21:35:17.734615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.585935ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2618718042736031601 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c068d\" mod_revision:633 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c068d\" value_size:830 lease:2618718042736031120 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c068d\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-11T21:35:17.734831Z","caller":"traceutil/trace.go:171","msg":"trace[430309502] linearizableReadLoop","detail":"{readStateIndex:708; appliedIndex:707; }","duration":"250.615585ms","start":"2024-03-11T21:35:17.4842Z","end":"2024-03-11T21:35:17.734816Z","steps":["trace[430309502] 'read index received'  (duration: 120.717511ms)","trace[430309502] 'applied index is now lower than readState.Index'  (duration: 129.896463ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-11T21:35:17.734908Z","caller":"traceutil/trace.go:171","msg":"trace[1428849145] transaction","detail":"{read_only:false; response_revision:655; number_of_response:1; }","duration":"253.823327ms","start":"2024-03-11T21:35:17.481075Z","end":"2024-03-11T21:35:17.734899Z","steps":["trace[1428849145] 'process raft request'  (duration: 123.884872ms)","trace[1428849145] 'compare'  (duration: 129.427399ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T21:35:17.735139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.944768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-324578\" ","response":"range_response_count:1 size:4692"}
	{"level":"info","ts":"2024-03-11T21:35:17.735195Z","caller":"traceutil/trace.go:171","msg":"trace[1597271578] range","detail":"{range_begin:/registry/minions/no-preload-324578; range_end:; response_count:1; response_revision:655; }","duration":"251.005454ms","start":"2024-03-11T21:35:17.484182Z","end":"2024-03-11T21:35:17.735187Z","steps":["trace[1597271578] 'agreement among raft nodes before linearized reading'  (duration: 250.884018ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T21:35:17.735334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.42753ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-03-11T21:35:17.735383Z","caller":"traceutil/trace.go:171","msg":"trace[2015197557] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:655; }","duration":"177.477099ms","start":"2024-03-11T21:35:17.557899Z","end":"2024-03-11T21:35:17.735376Z","steps":["trace[2015197557] 'agreement among raft nodes before linearized reading'  (duration: 177.405929ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T21:35:18.046145Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.618387ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2618718042736031606 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c97fe\" mod_revision:634 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c97fe\" value_size:668 lease:2618718042736031120 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c97fe\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-11T21:35:18.046452Z","caller":"traceutil/trace.go:171","msg":"trace[797991262] transaction","detail":"{read_only:false; response_revision:657; number_of_response:1; }","duration":"301.981313ms","start":"2024-03-11T21:35:17.74446Z","end":"2024-03-11T21:35:18.046441Z","steps":["trace[797991262] 'process raft request'  (duration: 301.912171ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T21:35:18.046558Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T21:35:17.744448Z","time spent":"302.073682ms","remote":"127.0.0.1:51226","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:484 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-03-11T21:35:18.046565Z","caller":"traceutil/trace.go:171","msg":"trace[2100089755] linearizableReadLoop","detail":"{readStateIndex:709; appliedIndex:708; }","duration":"306.087084ms","start":"2024-03-11T21:35:17.740465Z","end":"2024-03-11T21:35:18.046552Z","steps":["trace[2100089755] 'read index received'  (duration: 119.933961ms)","trace[2100089755] 'applied index is now lower than readState.Index'  (duration: 186.151718ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T21:35:18.046773Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"306.31703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-nv4gd\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-03-11T21:35:18.046824Z","caller":"traceutil/trace.go:171","msg":"trace[377667928] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-nv4gd; range_end:; response_count:1; response_revision:657; }","duration":"306.37314ms","start":"2024-03-11T21:35:17.740443Z","end":"2024-03-11T21:35:18.046817Z","steps":["trace[377667928] 'agreement among raft nodes before linearized reading'  (duration: 306.203485ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T21:35:18.046846Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T21:35:17.740433Z","time spent":"306.40709ms","remote":"127.0.0.1:51246","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4258,"request content":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-nv4gd\" "}
	{"level":"info","ts":"2024-03-11T21:35:18.046989Z","caller":"traceutil/trace.go:171","msg":"trace[1197202910] transaction","detail":"{read_only:false; response_revision:656; number_of_response:1; }","duration":"306.653787ms","start":"2024-03-11T21:35:17.740325Z","end":"2024-03-11T21:35:18.046979Z","steps":["trace[1197202910] 'process raft request'  (duration: 120.067487ms)","trace[1197202910] 'compare'  (duration: 185.413109ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T21:35:18.047067Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-11T21:35:17.740308Z","time spent":"306.724341ms","remote":"127.0.0.1:51136","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":763,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c97fe\" mod_revision:634 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c97fe\" value_size:668 lease:2618718042736031120 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-nv4gd.17bbd35baa4c97fe\" > >"}
	{"level":"info","ts":"2024-03-11T21:44:26.100335Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":874}
	{"level":"info","ts":"2024-03-11T21:44:26.103328Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":874,"took":"2.544833ms","hash":1178115223}
	{"level":"info","ts":"2024-03-11T21:44:26.103387Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1178115223,"revision":874,"compact-revision":-1}
	{"level":"info","ts":"2024-03-11T21:49:26.109389Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1116}
	{"level":"info","ts":"2024-03-11T21:49:26.111684Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1116,"took":"1.613346ms","hash":3223441971}
	{"level":"info","ts":"2024-03-11T21:49:26.111814Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3223441971,"revision":1116,"compact-revision":874}
	{"level":"info","ts":"2024-03-11T21:54:02.379036Z","caller":"traceutil/trace.go:171","msg":"trace[455843733] transaction","detail":"{read_only:false; response_revision:1583; number_of_response:1; }","duration":"188.435007ms","start":"2024-03-11T21:54:02.190504Z","end":"2024-03-11T21:54:02.378939Z","steps":["trace[455843733] 'process raft request'  (duration: 188.292854ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:54:23 up 20 min,  0 users,  load average: 0.32, 0.23, 0.14
	Linux no-preload-324578 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] <==
	I0311 21:47:29.269817       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:49:28.272078       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:49:28.272206       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0311 21:49:29.272985       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:49:29.273063       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:49:29.273072       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:49:29.272996       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:49:29.273138       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:49:29.274577       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:50:29.273257       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:50:29.273460       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:50:29.273496       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:50:29.275866       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:50:29.275915       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:50:29.275957       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:52:29.274110       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:52:29.274379       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:52:29.274414       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:52:29.276496       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:52:29.276607       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:52:29.276643       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] <==
	I0311 21:48:43.338225       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:49:12.811578       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:49:13.347890       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:49:42.816948       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:49:43.356584       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:50:12.822625       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:50:13.366422       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0311 21:50:42.810148       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="263.155µs"
	E0311 21:50:42.827354       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:50:43.374570       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0311 21:50:56.812003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="113.361µs"
	E0311 21:51:12.833646       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:51:13.384550       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:51:42.839096       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:51:43.392971       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:52:12.845409       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:52:13.403210       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:52:42.851512       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:52:43.411133       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:53:12.858939       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:53:13.419358       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:53:42.866085       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:53:43.429181       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:54:12.872270       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:54:13.439684       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] <==
	I0311 21:34:29.638221       1 server_others.go:72] "Using iptables proxy"
	I0311 21:34:29.650503       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.36"]
	I0311 21:34:29.704068       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0311 21:34:29.704129       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 21:34:29.704155       1 server_others.go:168] "Using iptables Proxier"
	I0311 21:34:29.707921       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 21:34:29.708391       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0311 21:34:29.708440       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:34:29.709589       1 config.go:188] "Starting service config controller"
	I0311 21:34:29.709659       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 21:34:29.709683       1 config.go:97] "Starting endpoint slice config controller"
	I0311 21:34:29.709847       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 21:34:29.710031       1 config.go:315] "Starting node config controller"
	I0311 21:34:29.710061       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 21:34:29.809855       1 shared_informer.go:318] Caches are synced for service config
	I0311 21:34:29.811050       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 21:34:29.811241       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] <==
	I0311 21:34:26.127922       1 serving.go:380] Generated self-signed cert in-memory
	W0311 21:34:28.232805       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0311 21:34:28.232921       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 21:34:28.232934       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0311 21:34:28.232941       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0311 21:34:28.298546       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0311 21:34:28.298649       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:34:28.300761       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0311 21:34:28.301037       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 21:34:28.301278       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0311 21:34:28.301467       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0311 21:34:28.402098       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 21:51:23 no-preload-324578 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:51:23 no-preload-324578 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:51:37 no-preload-324578 kubelet[1315]: E0311 21:51:37.796140    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:51:52 no-preload-324578 kubelet[1315]: E0311 21:51:52.795208    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:52:05 no-preload-324578 kubelet[1315]: E0311 21:52:05.795230    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:52:20 no-preload-324578 kubelet[1315]: E0311 21:52:20.795113    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:52:23 no-preload-324578 kubelet[1315]: E0311 21:52:23.811424    1315 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:52:23 no-preload-324578 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:52:23 no-preload-324578 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:52:23 no-preload-324578 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:52:23 no-preload-324578 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:52:31 no-preload-324578 kubelet[1315]: E0311 21:52:31.796207    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:52:42 no-preload-324578 kubelet[1315]: E0311 21:52:42.794426    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:52:56 no-preload-324578 kubelet[1315]: E0311 21:52:56.794303    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:53:07 no-preload-324578 kubelet[1315]: E0311 21:53:07.795160    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:53:22 no-preload-324578 kubelet[1315]: E0311 21:53:22.794819    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:53:23 no-preload-324578 kubelet[1315]: E0311 21:53:23.809385    1315 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:53:23 no-preload-324578 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:53:23 no-preload-324578 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:53:23 no-preload-324578 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:53:23 no-preload-324578 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:53:37 no-preload-324578 kubelet[1315]: E0311 21:53:37.795218    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:53:52 no-preload-324578 kubelet[1315]: E0311 21:53:52.794640    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:54:03 no-preload-324578 kubelet[1315]: E0311 21:54:03.797140    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	Mar 11 21:54:14 no-preload-324578 kubelet[1315]: E0311 21:54:14.798524    1315 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nv4gd" podUID="ae810c51-28bd-4c79-93ba-033f4767ba89"
	
	
	==> storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] <==
	I0311 21:35:00.142049       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 21:35:00.162424       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 21:35:00.162601       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 21:35:18.053813       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 21:35:18.054343       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88144734-da96-462d-b463-5b878079ac26", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-324578_4f8c71c4-91e4-4eb5-b31f-b50cae83aac9 became leader
	I0311 21:35:18.055390       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-324578_4f8c71c4-91e4-4eb5-b31f-b50cae83aac9!
	I0311 21:35:18.157284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-324578_4f8c71c4-91e4-4eb5-b31f-b50cae83aac9!
	
	
	==> storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] <==
	I0311 21:34:29.528325       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0311 21:34:59.531684       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-324578 -n no-preload-324578
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-324578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nv4gd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-324578 describe pod metrics-server-57f55c9bc5-nv4gd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-324578 describe pod metrics-server-57f55c9bc5-nv4gd: exit status 1 (78.080888ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nv4gd" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-324578 describe pod metrics-server-57f55c9bc5-nv4gd: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (383.11s)

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (334.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-743937 -n embed-certs-743937
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-11 21:54:29.12445197 +0000 UTC m=+6277.296126283
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-743937 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-743937 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.318µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-743937 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-743937 -n embed-certs-743937
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-743937 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-743937 logs -n 25: (1.401639058s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-427678 sudo find                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo crio                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-427678                                       | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-124446 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | disable-driver-mounts-124446                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:26 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-766430  | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-324578             | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-743937            | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239315        | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-766430       | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-324578                  | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:40 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-743937                 | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239315             | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC | 11 Mar 24 21:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:53 UTC | 11 Mar 24 21:53 UTC |
	| start   | -p newest-cni-649653 --memory=2200 --alsologtostderr   | newest-cni-649653            | jenkins | v1.32.0 | 11 Mar 24 21:53 UTC | 11 Mar 24 21:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:54 UTC | 11 Mar 24 21:54 UTC |
	| addons  | enable metrics-server -p newest-cni-649653             | newest-cni-649653            | jenkins | v1.32.0 | 11 Mar 24 21:54 UTC | 11 Mar 24 21:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 21:53:29
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 21:53:29.936719   75727 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:53:29.936864   75727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:53:29.936877   75727 out.go:304] Setting ErrFile to fd 2...
	I0311 21:53:29.936883   75727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:53:29.937117   75727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:53:29.937767   75727 out.go:298] Setting JSON to false
	I0311 21:53:29.938704   75727 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9359,"bootTime":1710184651,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:53:29.938760   75727 start.go:139] virtualization: kvm guest
	I0311 21:53:29.941562   75727 out.go:177] * [newest-cni-649653] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:53:29.943397   75727 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:53:29.943339   75727 notify.go:220] Checking for updates...
	I0311 21:53:29.946238   75727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:53:29.947621   75727 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:53:29.948958   75727 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:53:29.950257   75727 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:53:29.951747   75727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:53:29.953649   75727 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:53:29.953801   75727 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:53:29.953953   75727 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:53:29.954064   75727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:53:29.992804   75727 out.go:177] * Using the kvm2 driver based on user configuration
	I0311 21:53:29.994030   75727 start.go:297] selected driver: kvm2
	I0311 21:53:29.994050   75727 start.go:901] validating driver "kvm2" against <nil>
	I0311 21:53:29.994061   75727 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:53:29.994759   75727 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:53:29.994826   75727 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:53:30.011256   75727 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:53:30.011317   75727 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0311 21:53:30.011348   75727 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0311 21:53:30.011558   75727 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0311 21:53:30.011586   75727 cni.go:84] Creating CNI manager for ""
	I0311 21:53:30.011593   75727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:53:30.011599   75727 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 21:53:30.011672   75727 start.go:340] cluster config:
	{Name:newest-cni-649653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:53:30.011763   75727 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:53:30.013376   75727 out.go:177] * Starting "newest-cni-649653" primary control-plane node in "newest-cni-649653" cluster
	I0311 21:53:30.014694   75727 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 21:53:30.014724   75727 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0311 21:53:30.014731   75727 cache.go:56] Caching tarball of preloaded images
	I0311 21:53:30.014827   75727 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:53:30.014840   75727 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0311 21:53:30.014948   75727 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/config.json ...
	I0311 21:53:30.014966   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/config.json: {Name:mk51ceabf4fcf900816338d68a850020f60e97dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
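The cluster config dumped above is persisted as JSON in the profile's config.json, as the save shown here indicates. Assuming the Go struct fields serialize under the same names shown in the dump and that jq is available on the host, a few fields of interest can be pulled out like this (illustrative only):

    jq '.KubernetesConfig | {KubernetesVersion, ClusterName, NetworkPlugin, FeatureGates, ServiceCIDR}' \
      /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/config.json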
	I0311 21:53:30.015094   75727 start.go:360] acquireMachinesLock for newest-cni-649653: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:53:30.015120   75727 start.go:364] duration metric: took 14.071µs to acquireMachinesLock for "newest-cni-649653"
	I0311 21:53:30.015136   75727 start.go:93] Provisioning new machine with config: &{Name:newest-cni-649653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:53:30.015233   75727 start.go:125] createHost starting for "" (driver="kvm2")
	I0311 21:53:30.016995   75727 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 21:53:30.017159   75727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:53:30.017212   75727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:53:30.031426   75727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0311 21:53:30.031855   75727 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:53:30.032477   75727 main.go:141] libmachine: Using API Version  1
	I0311 21:53:30.032501   75727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:53:30.032861   75727 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:53:30.033071   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetMachineName
	I0311 21:53:30.033240   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:30.033407   75727 start.go:159] libmachine.API.Create for "newest-cni-649653" (driver="kvm2")
	I0311 21:53:30.033436   75727 client.go:168] LocalClient.Create starting
	I0311 21:53:30.033472   75727 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem
	I0311 21:53:30.033506   75727 main.go:141] libmachine: Decoding PEM data...
	I0311 21:53:30.033530   75727 main.go:141] libmachine: Parsing certificate...
	I0311 21:53:30.033598   75727 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem
	I0311 21:53:30.033642   75727 main.go:141] libmachine: Decoding PEM data...
	I0311 21:53:30.033659   75727 main.go:141] libmachine: Parsing certificate...
	I0311 21:53:30.033682   75727 main.go:141] libmachine: Running pre-create checks...
	I0311 21:53:30.033707   75727 main.go:141] libmachine: (newest-cni-649653) Calling .PreCreateCheck
	I0311 21:53:30.034058   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetConfigRaw
	I0311 21:53:30.034703   75727 main.go:141] libmachine: Creating machine...
	I0311 21:53:30.034741   75727 main.go:141] libmachine: (newest-cni-649653) Calling .Create
	I0311 21:53:30.034960   75727 main.go:141] libmachine: (newest-cni-649653) Creating KVM machine...
	I0311 21:53:30.037113   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found existing default KVM network
	I0311 21:53:30.038315   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.038151   75749 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:8c:65:64} reservation:<nil>}
	I0311 21:53:30.039336   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.039197   75749 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:de:2b:c4} reservation:<nil>}
	I0311 21:53:30.040105   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.040023   75749 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:10:c8:e3} reservation:<nil>}
	I0311 21:53:30.041171   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.041092   75749 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289890}
	I0311 21:53:30.041200   75727 main.go:141] libmachine: (newest-cni-649653) DBG | created network xml: 
	I0311 21:53:30.041213   75727 main.go:141] libmachine: (newest-cni-649653) DBG | <network>
	I0311 21:53:30.041231   75727 main.go:141] libmachine: (newest-cni-649653) DBG |   <name>mk-newest-cni-649653</name>
	I0311 21:53:30.041255   75727 main.go:141] libmachine: (newest-cni-649653) DBG |   <dns enable='no'/>
	I0311 21:53:30.041279   75727 main.go:141] libmachine: (newest-cni-649653) DBG |   
	I0311 21:53:30.041290   75727 main.go:141] libmachine: (newest-cni-649653) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0311 21:53:30.041301   75727 main.go:141] libmachine: (newest-cni-649653) DBG |     <dhcp>
	I0311 21:53:30.041358   75727 main.go:141] libmachine: (newest-cni-649653) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0311 21:53:30.041382   75727 main.go:141] libmachine: (newest-cni-649653) DBG |     </dhcp>
	I0311 21:53:30.041405   75727 main.go:141] libmachine: (newest-cni-649653) DBG |   </ip>
	I0311 21:53:30.041415   75727 main.go:141] libmachine: (newest-cni-649653) DBG |   
	I0311 21:53:30.041423   75727 main.go:141] libmachine: (newest-cni-649653) DBG | </network>
	I0311 21:53:30.041433   75727 main.go:141] libmachine: (newest-cni-649653) DBG | 
	I0311 21:53:30.046411   75727 main.go:141] libmachine: (newest-cni-649653) DBG | trying to create private KVM network mk-newest-cni-649653 192.168.72.0/24...
	I0311 21:53:30.118483   75727 main.go:141] libmachine: (newest-cni-649653) DBG | private KVM network mk-newest-cni-649653 192.168.72.0/24 created
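Minikube generated the network XML above (DNS disabled, DHCP range 192.168.72.2-253) and asked libvirt to create it as mk-newest-cni-649653. The result can be checked directly on the libvirt host with virsh (illustrative commands):

    # list all libvirt networks; mk-newest-cni-649653 should appear as active
    virsh net-list --all
    # dump the XML libvirt actually stored for the new network
    virsh net-dumpxml mk-newest-cni-649653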
	I0311 21:53:30.118580   75727 main.go:141] libmachine: (newest-cni-649653) Setting up store path in /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653 ...
	I0311 21:53:30.118670   75727 main.go:141] libmachine: (newest-cni-649653) Building disk image from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0311 21:53:30.118699   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.118631   75749 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:53:30.118796   75727 main.go:141] libmachine: (newest-cni-649653) Downloading /home/jenkins/minikube-integration/18358-11004/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0311 21:53:30.368677   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.368544   75749 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa...
	I0311 21:53:30.423818   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.423705   75749 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/newest-cni-649653.rawdisk...
	I0311 21:53:30.423863   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Writing magic tar header
	I0311 21:53:30.423883   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Writing SSH key tar header
	I0311 21:53:30.423949   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:30.423885   75749 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653 ...
	I0311 21:53:30.424051   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653
	I0311 21:53:30.424074   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube/machines
	I0311 21:53:30.424089   75727 main.go:141] libmachine: (newest-cni-649653) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653 (perms=drwx------)
	I0311 21:53:30.424141   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:53:30.424168   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18358-11004
	I0311 21:53:30.424185   75727 main.go:141] libmachine: (newest-cni-649653) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube/machines (perms=drwxr-xr-x)
	I0311 21:53:30.424201   75727 main.go:141] libmachine: (newest-cni-649653) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004/.minikube (perms=drwxr-xr-x)
	I0311 21:53:30.424214   75727 main.go:141] libmachine: (newest-cni-649653) Setting executable bit set on /home/jenkins/minikube-integration/18358-11004 (perms=drwxrwxr-x)
	I0311 21:53:30.424231   75727 main.go:141] libmachine: (newest-cni-649653) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0311 21:53:30.424242   75727 main.go:141] libmachine: (newest-cni-649653) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0311 21:53:30.424250   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0311 21:53:30.424261   75727 main.go:141] libmachine: (newest-cni-649653) Creating domain...
	I0311 21:53:30.424274   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home/jenkins
	I0311 21:53:30.424284   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Checking permissions on dir: /home
	I0311 21:53:30.424294   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Skipping /home - not owner
	I0311 21:53:30.425336   75727 main.go:141] libmachine: (newest-cni-649653) define libvirt domain using xml: 
	I0311 21:53:30.425361   75727 main.go:141] libmachine: (newest-cni-649653) <domain type='kvm'>
	I0311 21:53:30.425388   75727 main.go:141] libmachine: (newest-cni-649653)   <name>newest-cni-649653</name>
	I0311 21:53:30.425422   75727 main.go:141] libmachine: (newest-cni-649653)   <memory unit='MiB'>2200</memory>
	I0311 21:53:30.425435   75727 main.go:141] libmachine: (newest-cni-649653)   <vcpu>2</vcpu>
	I0311 21:53:30.425445   75727 main.go:141] libmachine: (newest-cni-649653)   <features>
	I0311 21:53:30.425457   75727 main.go:141] libmachine: (newest-cni-649653)     <acpi/>
	I0311 21:53:30.425469   75727 main.go:141] libmachine: (newest-cni-649653)     <apic/>
	I0311 21:53:30.425477   75727 main.go:141] libmachine: (newest-cni-649653)     <pae/>
	I0311 21:53:30.425490   75727 main.go:141] libmachine: (newest-cni-649653)     
	I0311 21:53:30.425502   75727 main.go:141] libmachine: (newest-cni-649653)   </features>
	I0311 21:53:30.425511   75727 main.go:141] libmachine: (newest-cni-649653)   <cpu mode='host-passthrough'>
	I0311 21:53:30.425522   75727 main.go:141] libmachine: (newest-cni-649653)   
	I0311 21:53:30.425529   75727 main.go:141] libmachine: (newest-cni-649653)   </cpu>
	I0311 21:53:30.425541   75727 main.go:141] libmachine: (newest-cni-649653)   <os>
	I0311 21:53:30.425548   75727 main.go:141] libmachine: (newest-cni-649653)     <type>hvm</type>
	I0311 21:53:30.425578   75727 main.go:141] libmachine: (newest-cni-649653)     <boot dev='cdrom'/>
	I0311 21:53:30.425602   75727 main.go:141] libmachine: (newest-cni-649653)     <boot dev='hd'/>
	I0311 21:53:30.425612   75727 main.go:141] libmachine: (newest-cni-649653)     <bootmenu enable='no'/>
	I0311 21:53:30.425622   75727 main.go:141] libmachine: (newest-cni-649653)   </os>
	I0311 21:53:30.425629   75727 main.go:141] libmachine: (newest-cni-649653)   <devices>
	I0311 21:53:30.425639   75727 main.go:141] libmachine: (newest-cni-649653)     <disk type='file' device='cdrom'>
	I0311 21:53:30.425653   75727 main.go:141] libmachine: (newest-cni-649653)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/boot2docker.iso'/>
	I0311 21:53:30.425678   75727 main.go:141] libmachine: (newest-cni-649653)       <target dev='hdc' bus='scsi'/>
	I0311 21:53:30.425691   75727 main.go:141] libmachine: (newest-cni-649653)       <readonly/>
	I0311 21:53:30.425702   75727 main.go:141] libmachine: (newest-cni-649653)     </disk>
	I0311 21:53:30.425716   75727 main.go:141] libmachine: (newest-cni-649653)     <disk type='file' device='disk'>
	I0311 21:53:30.425729   75727 main.go:141] libmachine: (newest-cni-649653)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0311 21:53:30.425748   75727 main.go:141] libmachine: (newest-cni-649653)       <source file='/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/newest-cni-649653.rawdisk'/>
	I0311 21:53:30.425767   75727 main.go:141] libmachine: (newest-cni-649653)       <target dev='hda' bus='virtio'/>
	I0311 21:53:30.425778   75727 main.go:141] libmachine: (newest-cni-649653)     </disk>
	I0311 21:53:30.425787   75727 main.go:141] libmachine: (newest-cni-649653)     <interface type='network'>
	I0311 21:53:30.425796   75727 main.go:141] libmachine: (newest-cni-649653)       <source network='mk-newest-cni-649653'/>
	I0311 21:53:30.425803   75727 main.go:141] libmachine: (newest-cni-649653)       <model type='virtio'/>
	I0311 21:53:30.425813   75727 main.go:141] libmachine: (newest-cni-649653)     </interface>
	I0311 21:53:30.425821   75727 main.go:141] libmachine: (newest-cni-649653)     <interface type='network'>
	I0311 21:53:30.425834   75727 main.go:141] libmachine: (newest-cni-649653)       <source network='default'/>
	I0311 21:53:30.425849   75727 main.go:141] libmachine: (newest-cni-649653)       <model type='virtio'/>
	I0311 21:53:30.425861   75727 main.go:141] libmachine: (newest-cni-649653)     </interface>
	I0311 21:53:30.425876   75727 main.go:141] libmachine: (newest-cni-649653)     <serial type='pty'>
	I0311 21:53:30.425889   75727 main.go:141] libmachine: (newest-cni-649653)       <target port='0'/>
	I0311 21:53:30.425900   75727 main.go:141] libmachine: (newest-cni-649653)     </serial>
	I0311 21:53:30.425912   75727 main.go:141] libmachine: (newest-cni-649653)     <console type='pty'>
	I0311 21:53:30.425928   75727 main.go:141] libmachine: (newest-cni-649653)       <target type='serial' port='0'/>
	I0311 21:53:30.425940   75727 main.go:141] libmachine: (newest-cni-649653)     </console>
	I0311 21:53:30.425951   75727 main.go:141] libmachine: (newest-cni-649653)     <rng model='virtio'>
	I0311 21:53:30.425964   75727 main.go:141] libmachine: (newest-cni-649653)       <backend model='random'>/dev/random</backend>
	I0311 21:53:30.425971   75727 main.go:141] libmachine: (newest-cni-649653)     </rng>
	I0311 21:53:30.425981   75727 main.go:141] libmachine: (newest-cni-649653)     
	I0311 21:53:30.425997   75727 main.go:141] libmachine: (newest-cni-649653)     
	I0311 21:53:30.426009   75727 main.go:141] libmachine: (newest-cni-649653)   </devices>
	I0311 21:53:30.426019   75727 main.go:141] libmachine: (newest-cni-649653) </domain>
	I0311 21:53:30.426028   75727 main.go:141] libmachine: (newest-cni-649653) 
	I0311 21:53:30.429994   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:dd:5f:e6 in network default
	I0311 21:53:30.430524   75727 main.go:141] libmachine: (newest-cni-649653) Ensuring networks are active...
	I0311 21:53:30.430578   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:30.431155   75727 main.go:141] libmachine: (newest-cni-649653) Ensuring network default is active
	I0311 21:53:30.431449   75727 main.go:141] libmachine: (newest-cni-649653) Ensuring network mk-newest-cni-649653 is active
	I0311 21:53:30.432000   75727 main.go:141] libmachine: (newest-cni-649653) Getting domain xml...
	I0311 21:53:30.432810   75727 main.go:141] libmachine: (newest-cni-649653) Creating domain...
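With the domain XML above defined and the machine being created, the host side can be inspected with virsh; domiflist in particular lists the two NICs (networks default and mk-newest-cni-649653) whose MAC addresses appear in the log (illustrative commands):

    virsh dominfo newest-cni-649653
    virsh domiflist newest-cni-649653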
	I0311 21:53:31.672132   75727 main.go:141] libmachine: (newest-cni-649653) Waiting to get IP...
	I0311 21:53:31.672912   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:31.673333   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:31.673354   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:31.673308   75749 retry.go:31] will retry after 191.593411ms: waiting for machine to come up
	I0311 21:53:31.866695   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:31.867244   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:31.867273   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:31.867190   75749 retry.go:31] will retry after 294.601067ms: waiting for machine to come up
	I0311 21:53:32.163613   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:32.164073   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:32.164096   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:32.164032   75749 retry.go:31] will retry after 483.852852ms: waiting for machine to come up
	I0311 21:53:32.649724   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:32.650154   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:32.650177   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:32.650109   75749 retry.go:31] will retry after 544.965754ms: waiting for machine to come up
	I0311 21:53:33.196825   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:33.197376   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:33.197404   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:33.197324   75749 retry.go:31] will retry after 569.467974ms: waiting for machine to come up
	I0311 21:53:33.768068   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:33.768616   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:33.768651   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:33.768568   75749 retry.go:31] will retry after 785.346216ms: waiting for machine to come up
	I0311 21:53:34.555442   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:34.555941   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:34.555970   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:34.555886   75749 retry.go:31] will retry after 1.185792657s: waiting for machine to come up
	I0311 21:53:35.745218   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:35.745759   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:35.745792   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:35.745709   75749 retry.go:31] will retry after 1.045736118s: waiting for machine to come up
	I0311 21:53:36.792624   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:36.793145   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:36.793175   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:36.793084   75749 retry.go:31] will retry after 1.492296791s: waiting for machine to come up
	I0311 21:53:38.286865   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:38.287447   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:38.287477   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:38.287401   75749 retry.go:31] will retry after 1.559903644s: waiting for machine to come up
	I0311 21:53:39.849344   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:39.849874   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:39.849901   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:39.849831   75749 retry.go:31] will retry after 1.851186773s: waiting for machine to come up
	I0311 21:53:41.703721   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:41.704286   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:41.704315   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:41.704256   75749 retry.go:31] will retry after 2.461306109s: waiting for machine to come up
	I0311 21:53:44.167385   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:44.167891   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:44.167914   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:44.167852   75749 retry.go:31] will retry after 3.635340302s: waiting for machine to come up
	I0311 21:53:47.805849   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:47.806340   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:53:47.806354   75727 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:53:47.806314   75749 retry.go:31] will retry after 5.440107138s: waiting for machine to come up
	I0311 21:53:53.247922   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.248455   75727 main.go:141] libmachine: (newest-cni-649653) Found IP for machine: 192.168.72.200
	I0311 21:53:53.248475   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has current primary IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.248481   75727 main.go:141] libmachine: (newest-cni-649653) Reserving static IP address...
	I0311 21:53:53.248956   75727 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find host DHCP lease matching {name: "newest-cni-649653", mac: "52:54:00:de:e6:a4", ip: "192.168.72.200"} in network mk-newest-cni-649653
	I0311 21:53:53.324889   75727 main.go:141] libmachine: (newest-cni-649653) Reserved static IP address: 192.168.72.200
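The retry loop above simply polls libvirt for a DHCP lease, backing off from roughly 200ms to several seconds until the guest reports 192.168.72.200, which minikube then pins as a static reservation. The same lease is visible from the host (illustrative):

    virsh net-dhcp-leases mk-newest-cni-649653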
	I0311 21:53:53.324917   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Getting to WaitForSSH function...
	I0311 21:53:53.324925   75727 main.go:141] libmachine: (newest-cni-649653) Waiting for SSH to be available...
	I0311 21:53:53.327808   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.328227   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:minikube Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.328256   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.328380   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Using SSH client type: external
	I0311 21:53:53.328412   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa (-rw-------)
	I0311 21:53:53.328443   75727 main.go:141] libmachine: (newest-cni-649653) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:53:53.328465   75727 main.go:141] libmachine: (newest-cni-649653) DBG | About to run SSH command:
	I0311 21:53:53.328482   75727 main.go:141] libmachine: (newest-cni-649653) DBG | exit 0
	I0311 21:53:53.456938   75727 main.go:141] libmachine: (newest-cni-649653) DBG | SSH cmd err, output: <nil>: 
	I0311 21:53:53.457206   75727 main.go:141] libmachine: (newest-cni-649653) KVM machine creation complete!
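The reachability probe is just an "exit 0" run over SSH with the freshly generated key; an equivalent manual check, reusing the key path and address from the log, would be:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa \
        docker@192.168.72.200 'exit 0' && echo reachable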
	I0311 21:53:53.457570   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetConfigRaw
	I0311 21:53:53.458093   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:53.458325   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:53.458511   75727 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0311 21:53:53.458532   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetState
	I0311 21:53:53.459943   75727 main.go:141] libmachine: Detecting operating system of created instance...
	I0311 21:53:53.459956   75727 main.go:141] libmachine: Waiting for SSH to be available...
	I0311 21:53:53.459962   75727 main.go:141] libmachine: Getting to WaitForSSH function...
	I0311 21:53:53.459967   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:53.462138   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.462556   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.462585   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.462703   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:53.462872   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.463008   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.463150   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:53.463320   75727 main.go:141] libmachine: Using SSH client type: native
	I0311 21:53:53.463530   75727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:53:53.463545   75727 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0311 21:53:53.576884   75727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:53:53.576911   75727 main.go:141] libmachine: Detecting the provisioner...
	I0311 21:53:53.576922   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:53.580079   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.580486   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.580516   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.580698   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:53.580912   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.581089   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.581262   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:53.581428   75727 main.go:141] libmachine: Using SSH client type: native
	I0311 21:53:53.581637   75727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:53:53.581649   75727 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0311 21:53:53.698584   75727 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0311 21:53:53.698672   75727 main.go:141] libmachine: found compatible host: buildroot
	I0311 21:53:53.698688   75727 main.go:141] libmachine: Provisioning with buildroot...
	I0311 21:53:53.698699   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetMachineName
	I0311 21:53:53.698996   75727 buildroot.go:166] provisioning hostname "newest-cni-649653"
	I0311 21:53:53.699023   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetMachineName
	I0311 21:53:53.699210   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:53.702170   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.702560   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.702595   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.702763   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:53.702951   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.703147   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.703342   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:53.703519   75727 main.go:141] libmachine: Using SSH client type: native
	I0311 21:53:53.703662   75727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:53:53.703675   75727 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-649653 && echo "newest-cni-649653" | sudo tee /etc/hostname
	I0311 21:53:53.829333   75727 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-649653
	
	I0311 21:53:53.829359   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:53.832115   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.832481   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.832511   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.832692   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:53.832908   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.833085   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:53.833218   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:53.833377   75727 main.go:141] libmachine: Using SSH client type: native
	I0311 21:53:53.833577   75727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:53:53.833597   75727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-649653' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-649653/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-649653' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:53:53.951985   75727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:53:53.952013   75727 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:53:53.952058   75727 buildroot.go:174] setting up certificates
	I0311 21:53:53.952072   75727 provision.go:84] configureAuth start
	I0311 21:53:53.952089   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetMachineName
	I0311 21:53:53.952337   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetIP
	I0311 21:53:53.955265   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.955545   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.955577   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.955773   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:53.958412   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.958775   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:53.958796   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:53.958900   75727 provision.go:143] copyHostCerts
	I0311 21:53:53.958973   75727 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:53:53.958985   75727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:53:53.959075   75727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:53:53.959184   75727 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:53:53.959196   75727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:53:53.959235   75727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:53:53.959313   75727 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:53:53.959321   75727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:53:53.959346   75727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:53:53.959395   75727 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.newest-cni-649653 san=[127.0.0.1 192.168.72.200 localhost minikube newest-cni-649653]
	I0311 21:53:54.150706   75727 provision.go:177] copyRemoteCerts
	I0311 21:53:54.150763   75727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:53:54.150784   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:54.153582   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.153935   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.153961   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.154165   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:54.154356   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.154536   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:54.154684   75727 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:53:54.241042   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:53:54.270677   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0311 21:53:54.300851   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 21:53:54.328705   75727 provision.go:87] duration metric: took 376.618763ms to configureAuth
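configureAuth generated a server certificate with SANs [127.0.0.1 192.168.72.200 localhost minikube newest-cni-649653] and copied it to /etc/docker on the node. Once there, it can be inspected over SSH with openssl (illustrative):

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'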
	I0311 21:53:54.328730   75727 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:53:54.328939   75727 config.go:182] Loaded profile config "newest-cni-649653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:53:54.329038   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:54.331628   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.331985   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.332015   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.332187   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:54.332363   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.332500   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.332673   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:54.332880   75727 main.go:141] libmachine: Using SSH client type: native
	I0311 21:53:54.333072   75727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:53:54.333096   75727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:53:54.625255   75727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
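The drop-in written above only adds --insecure-registry for the service CIDR and restarts crio. A quick sanity check on the node (illustrative) is:

    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio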
	I0311 21:53:54.625295   75727 main.go:141] libmachine: Checking connection to Docker...
	I0311 21:53:54.625308   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetURL
	I0311 21:53:54.626637   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Using libvirt version 6000000
	I0311 21:53:54.629212   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.629562   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.629594   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.629770   75727 main.go:141] libmachine: Docker is up and running!
	I0311 21:53:54.629789   75727 main.go:141] libmachine: Reticulating splines...
	I0311 21:53:54.629797   75727 client.go:171] duration metric: took 24.59635051s to LocalClient.Create
	I0311 21:53:54.629828   75727 start.go:167] duration metric: took 24.596423194s to libmachine.API.Create "newest-cni-649653"
	I0311 21:53:54.629840   75727 start.go:293] postStartSetup for "newest-cni-649653" (driver="kvm2")
	I0311 21:53:54.629856   75727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:53:54.629880   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:54.630114   75727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:53:54.630138   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:54.632260   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.632604   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.632624   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.632803   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:54.632969   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.633110   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:54.633241   75727 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:53:54.721677   75727 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:53:54.726644   75727 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:53:54.726670   75727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:53:54.726729   75727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:53:54.726821   75727 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:53:54.726943   75727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:53:54.738217   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:53:54.765778   75727 start.go:296] duration metric: took 135.928566ms for postStartSetup
	I0311 21:53:54.765822   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetConfigRaw
	I0311 21:53:54.766426   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetIP
	I0311 21:53:54.769252   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.769561   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.769587   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.769833   75727 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/config.json ...
	I0311 21:53:54.770013   75727 start.go:128] duration metric: took 24.754772148s to createHost
	I0311 21:53:54.770059   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:54.772273   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.772599   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.772627   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.772764   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:54.772947   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.773160   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.773337   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:54.773506   75727 main.go:141] libmachine: Using SSH client type: native
	I0311 21:53:54.773708   75727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:53:54.773723   75727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:53:54.889621   75727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710194034.860062119
	
	I0311 21:53:54.889646   75727 fix.go:216] guest clock: 1710194034.860062119
	I0311 21:53:54.889656   75727 fix.go:229] Guest: 2024-03-11 21:53:54.860062119 +0000 UTC Remote: 2024-03-11 21:53:54.770035432 +0000 UTC m=+24.881905345 (delta=90.026687ms)
	I0311 21:53:54.889700   75727 fix.go:200] guest clock delta is within tolerance: 90.026687ms
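Note: the clock check above reads the guest clock over SSH with "date +%s.%N" and compares it to the host clock, accepting the start when the delta stays inside tolerance. A minimal sketch of the same comparison done by hand, assuming SSH access to the guest with the key and username shown later in this log (the bc call is only there to do the subtraction):
    guest=$(ssh -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa docker@192.168.72.200 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest/host clock delta: $(echo "$host - $guest" | bc) s"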
	I0311 21:53:54.889710   75727 start.go:83] releasing machines lock for "newest-cni-649653", held for 24.874581271s
	I0311 21:53:54.889732   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:54.890018   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetIP
	I0311 21:53:54.892706   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.893121   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.893149   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.893289   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:54.893749   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:54.893927   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:53:54.894008   75727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:53:54.894054   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:54.894284   75727 ssh_runner.go:195] Run: cat /version.json
	I0311 21:53:54.894329   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:53:54.896651   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.896981   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.897010   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.897028   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.897171   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:54.897331   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.897481   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:54.897509   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:54.897533   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:54.897616   75727 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:53:54.897778   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:53:54.897913   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:53:54.898114   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:53:54.898282   75727 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:53:54.978389   75727 ssh_runner.go:195] Run: systemctl --version
	I0311 21:53:54.999856   75727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:53:55.165475   75727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:53:55.172574   75727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:53:55.172635   75727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:53:55.190704   75727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:53:55.190725   75727 start.go:494] detecting cgroup driver to use...
	I0311 21:53:55.190773   75727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:53:55.211452   75727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:53:55.226259   75727 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:53:55.226314   75727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:53:55.242009   75727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:53:55.257562   75727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:53:55.383455   75727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:53:55.554383   75727 docker.go:233] disabling docker service ...
	I0311 21:53:55.554456   75727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:53:55.569224   75727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:53:55.584403   75727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:53:55.715319   75727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:53:55.852371   75727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:53:55.869679   75727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:53:55.893816   75727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:53:55.893883   75727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:53:55.905816   75727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:53:55.905867   75727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:53:55.917741   75727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:53:55.929470   75727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
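Note: the sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" directly after it. A sketch of the relevant keys after the edits; the section headers shown are the usual crio.conf layout and are assumed here, since the drop-in itself is never printed in this log:
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"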
	I0311 21:53:55.941689   75727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:53:55.954149   75727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:53:55.965791   75727 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:53:55.965847   75727 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:53:55.980599   75727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
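Note: the sysctl probe failed only because br_netfilter was not loaded yet; once modprobe loads it, bridged traffic becomes visible to iptables, and ip_forward is then switched on directly through /proc. The run above applies these settings at runtime only; a persistent variant of the same prerequisites (standard Kubernetes node setup, not something this run performs) would look like:
    sudo modprobe br_netfilter
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    sudo sysctl -w net.ipv4.ip_forward=1
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.bridge.bridge-nf-call-iptables=1\nnet.ipv4.ip_forward=1\n' | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    sudo sysctl --system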
	I0311 21:53:55.991382   75727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:53:56.116463   75727 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:53:56.274461   75727 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:53:56.274546   75727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:53:56.280509   75727 start.go:562] Will wait 60s for crictl version
	I0311 21:53:56.280587   75727 ssh_runner.go:195] Run: which crictl
	I0311 21:53:56.285398   75727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:53:56.326218   75727 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:53:56.326310   75727 ssh_runner.go:195] Run: crio --version
	I0311 21:53:56.361133   75727 ssh_runner.go:195] Run: crio --version
	I0311 21:53:56.396638   75727 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0311 21:53:56.397886   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetIP
	I0311 21:53:56.400681   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:56.401093   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:53:56.401122   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:53:56.401361   75727 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0311 21:53:56.406263   75727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:53:56.422239   75727 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0311 21:53:56.423576   75727 kubeadm.go:877] updating cluster {Name:newest-cni-649653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:53:56.423715   75727 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 21:53:56.423796   75727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:53:56.464027   75727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0311 21:53:56.464086   75727 ssh_runner.go:195] Run: which lz4
	I0311 21:53:56.469092   75727 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:53:56.474417   75727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:53:56.474448   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0311 21:53:58.184795   75727 crio.go:444] duration metric: took 1.715725311s to copy over tarball
	I0311 21:53:58.184855   75727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:54:00.846200   75727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.661323281s)
	I0311 21:54:00.846224   75727 crio.go:451] duration metric: took 2.661404275s to extract the tarball
	I0311 21:54:00.846231   75727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:54:00.889345   75727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:54:00.939615   75727 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:54:00.939644   75727 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:54:00.939654   75727 kubeadm.go:928] updating node { 192.168.72.200 8443 v1.29.0-rc.2 crio true true} ...
	I0311 21:54:00.939800   75727 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-649653 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
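Note: the [Unit]/[Service]/[Install] snippet above is the kubelet systemd drop-in that minikube renders from the cluster config; a few lines further down it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and activated with daemon-reload. To inspect the merged unit on the guest afterwards (illustrative):
    systemctl cat kubelet                  # kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl status kubelet --no-pager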
	I0311 21:54:00.939889   75727 ssh_runner.go:195] Run: crio config
	I0311 21:54:01.002487   75727 cni.go:84] Creating CNI manager for ""
	I0311 21:54:01.002513   75727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:54:01.002528   75727 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0311 21:54:01.002554   75727 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.200 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-649653 NodeName:newest-cni-649653 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.72.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:54:01.002719   75727 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-649653"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
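Note: the block above is the complete kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later drives kubeadm init. A way to sanity-check such a file before the real init, using the same bundled kubeadm binary (illustrative; this run goes straight to init):
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml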
	
	I0311 21:54:01.002790   75727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0311 21:54:01.014123   75727 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:54:01.014181   75727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:54:01.025444   75727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0311 21:54:01.044878   75727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0311 21:54:01.064168   75727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0311 21:54:01.085853   75727 ssh_runner.go:195] Run: grep 192.168.72.200	control-plane.minikube.internal$ /etc/hosts
	I0311 21:54:01.090627   75727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:54:01.107128   75727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:54:01.244930   75727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:54:01.276382   75727 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653 for IP: 192.168.72.200
	I0311 21:54:01.276413   75727 certs.go:194] generating shared ca certs ...
	I0311 21:54:01.276434   75727 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.276630   75727 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:54:01.276698   75727 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:54:01.276712   75727 certs.go:256] generating profile certs ...
	I0311 21:54:01.276807   75727 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/client.key
	I0311 21:54:01.276828   75727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/client.crt with IP's: []
	I0311 21:54:01.627941   75727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/client.crt ...
	I0311 21:54:01.627971   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/client.crt: {Name:mkf48f6f5efea8f700b7f0c847dacf2dd1d2e015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.628143   75727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/client.key ...
	I0311 21:54:01.628158   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/client.key: {Name:mk50dccdde388046496defc6928981b552d846f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.628265   75727 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key.da5ea2e9
	I0311 21:54:01.628284   75727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.crt.da5ea2e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.200]
	I0311 21:54:01.828611   75727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.crt.da5ea2e9 ...
	I0311 21:54:01.828638   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.crt.da5ea2e9: {Name:mk0347b1ae25febf5b63847a7ddfd2a05199f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.828798   75727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key.da5ea2e9 ...
	I0311 21:54:01.828812   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key.da5ea2e9: {Name:mkde9c302a709830ac1b06e65a9cb8dbe9e198a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.828878   75727 certs.go:381] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.crt.da5ea2e9 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.crt
	I0311 21:54:01.828959   75727 certs.go:385] copying /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key.da5ea2e9 -> /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key
	I0311 21:54:01.829022   75727 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.key
	I0311 21:54:01.829037   75727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.crt with IP's: []
	I0311 21:54:01.931462   75727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.crt ...
	I0311 21:54:01.931490   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.crt: {Name:mkae49738e3b70f7be593c3b9fce3c08854baf14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.931655   75727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.key ...
	I0311 21:54:01.931674   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.key: {Name:mk96bdf422ac9f796c11a3a971f7b0b8e448149b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:01.931885   75727 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:54:01.931937   75727 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:54:01.931951   75727 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:54:01.931990   75727 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:54:01.932021   75727 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:54:01.932054   75727 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:54:01.932102   75727 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:54:01.932701   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:54:01.962480   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:54:01.990778   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:54:02.019990   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:54:02.046427   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0311 21:54:02.074892   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 21:54:02.104767   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:54:02.134391   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:54:02.166867   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:54:02.194514   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:54:02.225355   75727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:54:02.253290   75727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:54:02.275178   75727 ssh_runner.go:195] Run: openssl version
	I0311 21:54:02.281835   75727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:54:02.296517   75727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:54:02.301671   75727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:54:02.301728   75727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:54:02.308307   75727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:54:02.321766   75727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:54:02.334830   75727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:54:02.339787   75727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:54:02.339838   75727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:54:02.346037   75727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:54:02.361915   75727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:54:02.375852   75727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:54:02.381433   75727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:54:02.381483   75727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:54:02.388334   75727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
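Note: the ls/openssl/ln sequence repeated above for minikubeCA.pem, 18235.pem and 182352.pem builds the OpenSSL-style trust links: each certificate is placed under /usr/share/ca-certificates and symlinked into /etc/ssl/certs under its subject hash (b5213941, 51391683 and 3ec20f2e here), which is how OpenSSL locates CAs by hash. Reproducing one of the hashes and confirming the link (illustrative):
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem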
	I0311 21:54:02.402826   75727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:54:02.407714   75727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 21:54:02.407779   75727 kubeadm.go:391] StartCluster: {Name:newest-cni-649653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:54:02.407875   75727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:54:02.407945   75727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:54:02.455491   75727 cri.go:89] found id: ""
	I0311 21:54:02.455590   75727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0311 21:54:02.468270   75727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:54:02.480203   75727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:54:02.492223   75727 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:54:02.492241   75727 kubeadm.go:156] found existing configuration files:
	
	I0311 21:54:02.492296   75727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:54:02.503710   75727 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:54:02.503784   75727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:54:02.516510   75727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:54:02.527862   75727 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:54:02.527924   75727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:54:02.539625   75727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:54:02.550194   75727 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:54:02.550245   75727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:54:02.561657   75727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:54:02.575026   75727 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:54:02.575092   75727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:54:02.587271   75727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:54:02.722108   75727 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0311 21:54:02.722162   75727 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:54:02.873257   75727 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:54:02.873404   75727 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:54:02.873511   75727 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:54:03.156608   75727 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:54:03.193733   75727 out.go:204]   - Generating certificates and keys ...
	I0311 21:54:03.193865   75727 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:54:03.193960   75727 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:54:03.568463   75727 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0311 21:54:03.760913   75727 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0311 21:54:03.919423   75727 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0311 21:54:04.078749   75727 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0311 21:54:04.356013   75727 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0311 21:54:04.356582   75727 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-649653] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0311 21:54:04.471562   75727 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0311 21:54:04.471775   75727 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-649653] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0311 21:54:04.593524   75727 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0311 21:54:04.731682   75727 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0311 21:54:04.801313   75727 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0311 21:54:04.801592   75727 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:54:05.357683   75727 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:54:05.611334   75727 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0311 21:54:05.664186   75727 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:54:05.810194   75727 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:54:06.127505   75727 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:54:06.128316   75727 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:54:06.131379   75727 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:54:06.133066   75727 out.go:204]   - Booting up control plane ...
	I0311 21:54:06.133183   75727 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:54:06.133304   75727 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:54:06.133375   75727 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:54:06.154341   75727 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:54:06.154459   75727 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:54:06.154521   75727 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:54:06.306315   75727 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:54:12.807708   75727 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503824 seconds
	I0311 21:54:12.827358   75727 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:54:12.852154   75727 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:54:13.391122   75727 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:54:13.391378   75727 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-649653 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:54:13.905871   75727 kubeadm.go:309] [bootstrap-token] Using token: tpk8d0.p1x67f7vtd5pwmvt
	I0311 21:54:13.907328   75727 out.go:204]   - Configuring RBAC rules ...
	I0311 21:54:13.907485   75727 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:54:13.916426   75727 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:54:13.928858   75727 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:54:13.933891   75727 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:54:13.938157   75727 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:54:13.948917   75727 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:54:13.965910   75727 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:54:14.274241   75727 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:54:14.341311   75727 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:54:14.344492   75727 kubeadm.go:309] 
	I0311 21:54:14.344590   75727 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:54:14.344613   75727 kubeadm.go:309] 
	I0311 21:54:14.344710   75727 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:54:14.344722   75727 kubeadm.go:309] 
	I0311 21:54:14.344780   75727 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:54:14.344899   75727 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:54:14.344983   75727 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:54:14.344997   75727 kubeadm.go:309] 
	I0311 21:54:14.345096   75727 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:54:14.345115   75727 kubeadm.go:309] 
	I0311 21:54:14.345187   75727 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:54:14.345201   75727 kubeadm.go:309] 
	I0311 21:54:14.345260   75727 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:54:14.345384   75727 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:54:14.345486   75727 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:54:14.345499   75727 kubeadm.go:309] 
	I0311 21:54:14.345616   75727 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:54:14.345720   75727 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:54:14.345730   75727 kubeadm.go:309] 
	I0311 21:54:14.345847   75727 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token tpk8d0.p1x67f7vtd5pwmvt \
	I0311 21:54:14.345980   75727 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:54:14.346011   75727 kubeadm.go:309] 	--control-plane 
	I0311 21:54:14.346021   75727 kubeadm.go:309] 
	I0311 21:54:14.346127   75727 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:54:14.346155   75727 kubeadm.go:309] 
	I0311 21:54:14.346246   75727 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token tpk8d0.p1x67f7vtd5pwmvt \
	I0311 21:54:14.346363   75727 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:54:14.346489   75727 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
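Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA public key. It can be recomputed on the control plane with the standard openssl pipeline from the kubeadm documentation; in this cluster the CA lives under /var/lib/minikube/certs (see certificatesDir in the config earlier), and a default RSA CA key is assumed:
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'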
	I0311 21:54:14.346515   75727 cni.go:84] Creating CNI manager for ""
	I0311 21:54:14.346524   75727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:54:14.348328   75727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:54:14.349881   75727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:54:14.382261   75727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:54:14.423980   75727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:54:14.424072   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:14.424087   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-649653 minikube.k8s.io/updated_at=2024_03_11T21_54_14_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=newest-cni-649653 minikube.k8s.io/primary=true
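Note: the two kubectl calls above grant cluster-admin to the kube-system default service account (the minikube-rbac binding) and stamp the node with the minikube version/commit labels. Quick ways to confirm both took effect, using the same in-guest kubectl and kubeconfig (illustrative):
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac -o wide
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node newest-cni-649653 --show-labels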
	I0311 21:54:14.503304   75727 ops.go:34] apiserver oom_adj: -16
	I0311 21:54:14.796782   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:15.297203   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:15.797485   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:16.297560   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:16.797305   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:17.297791   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:17.797499   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:18.297505   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:18.797458   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:19.297502   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:19.796820   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:20.296810   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:20.796882   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:21.296900   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:21.797720   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:22.296878   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:22.797592   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:23.297126   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:23.796888   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:24.297261   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:24.796936   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:25.296801   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:25.796868   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:26.296838   75727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:54:26.425986   75727 kubeadm.go:1106] duration metric: took 12.001973592s to wait for elevateKubeSystemPrivileges
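Note: the repeated "kubectl get sa default" calls above are a polling loop: the step summarized as elevateKubeSystemPrivileges keeps retrying until the default ServiceAccount exists, which here took about 12 seconds. The same wait expressed as a plain shell loop (sketch, same binary and kubeconfig as in the log):
    until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done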
	W0311 21:54:26.426024   75727 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 21:54:26.426046   75727 kubeadm.go:393] duration metric: took 24.018267839s to StartCluster
	I0311 21:54:26.426064   75727 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:26.426141   75727 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:54:26.428978   75727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:54:26.429231   75727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0311 21:54:26.429243   75727 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:54:26.430965   75727 out.go:177] * Verifying Kubernetes components...
	I0311 21:54:26.429335   75727 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:54:26.429457   75727 config.go:182] Loaded profile config "newest-cni-649653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:54:26.432279   75727 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-649653"
	I0311 21:54:26.432298   75727 addons.go:69] Setting default-storageclass=true in profile "newest-cni-649653"
	I0311 21:54:26.432319   75727 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-649653"
	I0311 21:54:26.432326   75727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-649653"
	I0311 21:54:26.432353   75727 host.go:66] Checking if "newest-cni-649653" exists ...
	I0311 21:54:26.432284   75727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:54:26.432812   75727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:54:26.432847   75727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:54:26.432901   75727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:54:26.432939   75727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:54:26.448084   75727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
	I0311 21:54:26.448573   75727 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:54:26.449193   75727 main.go:141] libmachine: Using API Version  1
	I0311 21:54:26.449254   75727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:54:26.449788   75727 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:54:26.450028   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetState
	I0311 21:54:26.454282   75727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44791
	I0311 21:54:26.455027   75727 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:54:26.455543   75727 main.go:141] libmachine: Using API Version  1
	I0311 21:54:26.455571   75727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:54:26.455926   75727 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:54:26.456383   75727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:54:26.456427   75727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:54:26.457249   75727 addons.go:234] Setting addon default-storageclass=true in "newest-cni-649653"
	I0311 21:54:26.457285   75727 host.go:66] Checking if "newest-cni-649653" exists ...
	I0311 21:54:26.457582   75727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:54:26.457605   75727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:54:26.471248   75727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41763
	I0311 21:54:26.471720   75727 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:54:26.472419   75727 main.go:141] libmachine: Using API Version  1
	I0311 21:54:26.472448   75727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:54:26.472857   75727 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:54:26.473689   75727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0311 21:54:26.474040   75727 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:54:26.474142   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetState
	I0311 21:54:26.474519   75727 main.go:141] libmachine: Using API Version  1
	I0311 21:54:26.474542   75727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:54:26.474935   75727 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:54:26.475529   75727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:54:26.475570   75727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:54:26.476065   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:54:26.478760   75727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:54:26.480509   75727 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:54:26.480526   75727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:54:26.480541   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:54:26.484013   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:26.484470   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:54:26.484496   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:26.484642   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:54:26.484857   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:54:26.485035   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:54:26.485195   75727 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:54:26.492498   75727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44121
	I0311 21:54:26.492901   75727 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:54:26.493651   75727 main.go:141] libmachine: Using API Version  1
	I0311 21:54:26.493669   75727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:54:26.493991   75727 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:54:26.494208   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetState
	I0311 21:54:26.495682   75727 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:54:26.495948   75727 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:54:26.495962   75727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:54:26.495977   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:54:26.498843   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:26.499199   75727 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:53:45 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:54:26.499224   75727 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:26.499358   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:54:26.499524   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:54:26.499669   75727 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:54:26.499791   75727 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:54:26.734197   75727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:54:26.734290   75727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0311 21:54:26.760900   75727 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:54:26.760973   75727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:54:26.877088   75727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:54:26.926024   75727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:54:27.441911   75727 start.go:948] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0311 21:54:27.441980   75727 api_server.go:72] duration metric: took 1.012705124s to wait for apiserver process to appear ...
	I0311 21:54:27.442002   75727 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:54:27.442030   75727 main.go:141] libmachine: Making call to close driver server
	I0311 21:54:27.442050   75727 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:54:27.442058   75727 api_server.go:253] Checking apiserver healthz at https://192.168.72.200:8443/healthz ...
	I0311 21:54:27.442690   75727 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:54:27.442706   75727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:54:27.442757   75727 main.go:141] libmachine: Making call to close driver server
	I0311 21:54:27.442770   75727 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:54:27.442770   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Closing plugin on server side
	I0311 21:54:27.443017   75727 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:54:27.443029   75727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:54:27.449134   75727 api_server.go:279] https://192.168.72.200:8443/healthz returned 200:
	ok
	I0311 21:54:27.455128   75727 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:54:27.455153   75727 api_server.go:131] duration metric: took 13.144132ms to wait for apiserver health ...
	I0311 21:54:27.455163   75727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:54:27.468067   75727 system_pods.go:59] 7 kube-system pods found
	I0311 21:54:27.468098   75727 system_pods.go:61] "coredns-76f75df574-688gg" [0b3d26ae-e36c-437a-bcad-c7e8fa26a07b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:54:27.468110   75727 system_pods.go:61] "coredns-76f75df574-g85bm" [fd2f77c2-693a-4f09-b90c-0616dcddf189] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:54:27.468118   75727 system_pods.go:61] "etcd-newest-cni-649653" [0165fccf-11d5-4ee3-a496-5fda099385d1] Running
	I0311 21:54:27.468127   75727 system_pods.go:61] "kube-apiserver-newest-cni-649653" [e538de26-96b2-4028-afa3-2a78f71fa1c3] Running
	I0311 21:54:27.468141   75727 system_pods.go:61] "kube-controller-manager-newest-cni-649653" [8cafd132-158b-4b13-9a6c-ef4ff4c346cb] Running
	I0311 21:54:27.468148   75727 system_pods.go:61] "kube-proxy-bjqff" [1dd10b77-aa4f-48ba-bef8-6f60ff15b2a6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:54:27.468157   75727 system_pods.go:61] "kube-scheduler-newest-cni-649653" [2b449005-d1ea-4676-ba7a-4f36e7bf1bc2] Running
	I0311 21:54:27.468168   75727 system_pods.go:74] duration metric: took 12.998633ms to wait for pod list to return data ...
	I0311 21:54:27.468180   75727 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:54:27.483142   75727 default_sa.go:45] found service account: "default"
	I0311 21:54:27.483168   75727 default_sa.go:55] duration metric: took 14.977302ms for default service account to be created ...
	I0311 21:54:27.483182   75727 kubeadm.go:576] duration metric: took 1.053911115s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0311 21:54:27.483204   75727 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:54:27.483300   75727 main.go:141] libmachine: Making call to close driver server
	I0311 21:54:27.483325   75727 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:54:27.483595   75727 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:54:27.483609   75727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:54:27.504467   75727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:54:27.504494   75727 node_conditions.go:123] node cpu capacity is 2
	I0311 21:54:27.504510   75727 node_conditions.go:105] duration metric: took 21.299146ms to run NodePressure ...
	I0311 21:54:27.504524   75727 start.go:240] waiting for startup goroutines ...
	I0311 21:54:27.947334   75727 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-649653" context rescaled to 1 replicas
	I0311 21:54:28.062776   75727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.136707514s)
	I0311 21:54:28.062824   75727 main.go:141] libmachine: Making call to close driver server
	I0311 21:54:28.062837   75727 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:54:28.063129   75727 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:54:28.063151   75727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:54:28.063160   75727 main.go:141] libmachine: Making call to close driver server
	I0311 21:54:28.063172   75727 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:54:28.063169   75727 main.go:141] libmachine: (newest-cni-649653) DBG | Closing plugin on server side
	I0311 21:54:28.063409   75727 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:54:28.063424   75727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:54:28.065014   75727 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0311 21:54:28.066386   75727 addons.go:505] duration metric: took 1.637049478s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0311 21:54:28.066418   75727 start.go:245] waiting for cluster config update ...
	I0311 21:54:28.066428   75727 start.go:254] writing updated cluster config ...
	I0311 21:54:28.066674   75727 ssh_runner.go:195] Run: rm -f paused
	I0311 21:54:28.149832   75727 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0311 21:54:28.151445   75727 out.go:177] * Done! kubectl is now configured to use "newest-cni-649653" cluster and "default" namespace by default
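	
	The start log above shows the profile "newest-cni-649653" finishing its bring-up: minikube waits for the apiserver /healthz endpoint to return 200, applies the storage-provisioner and default-storageclass addon manifests with the bundled kubectl, and injects a host.minikube.internal record into the CoreDNS ConfigMap. The commands below are an illustrative sketch of how one might verify those same steps by hand against this profile; they are not part of the recorded test run.
	
	  # Illustrative only -- not executed by the test above.
	  # Hit the same /healthz endpoint the log polls (api_server.go:253):
	  kubectl --context newest-cni-649653 get --raw /healthz
	  # Confirm the two addons the log reports as enabled:
	  minikube -p newest-cni-649653 addons list
	  # Inspect the CoreDNS ConfigMap for the injected host.minikube.internal entry:
	  kubectl --context newest-cni-649653 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	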
	
	
	==> CRI-O <==
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.902932256Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbc1fb46-02fe-4d5a-9575-4e8d90578db6 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.904494580Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25fd596e-253f-494b-b4ac-413f6accac40 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.904903864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194069904880188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25fd596e-253f-494b-b4ac-413f6accac40 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.905840746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f8cdf52-765a-4f05-8a89-8afc889c6218 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.905926605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f8cdf52-765a-4f05-8a89-8afc889c6218 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.906118988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c735d180d5d0680318bcfdd8e1508a82b2181aef6108badc75c9d29b0713af9,PodSandboxId:43387911d61cb4d07d6f1fb9b52b7769cfe6b47e58b83a4e5463857d1bc4c216,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190961600303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-58ct4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa2415-2468-4a6d-887f-5eb6e455bbea,},Annotations:map[string]string{io.kubernetes.container.hash: 2b42a678,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4290fa687c68e62428910cf34c67eba8505eebffa114ebfc5fabe86ed057e4a8,PodSandboxId:4f033c8242f61023c64508a0545af22b41c820d4ff51bce7ca65f7de639836b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190977082194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hct77,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31,},Annotations:map[string]string{io.kubernetes.container.hash: ac3c9c5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b933a93694d7512040b9cc8038beec371ceaa7ae68f6990c4e899e1732503bd5,PodSandboxId:a0b2d2af8dc36b2322fa28253098075739c367de4bad1995d47b81cebf24b347,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1710193190469893544,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2096cbb5-d96f-48f5-a04a-eb596646c8ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8016b8d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11079c6b59c6771cb52b55b16525d47ef7a0c4a1a3717185d973b0cdb18aadf1,PodSandboxId:cd4ad099890fe71f332d6eec01f238230e611608b938e29ab6d8e8c77ac7e689,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710193188958511076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xmlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18fd74c-17fa-44f1-a7e4-ab19fffe497b,},Annotations:map[string]string{io.kubernetes.container.hash: 710f9e96,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe64dcf976f8a0834063fd35ba390a65c7e0bfe5003a39b02b08afa61573aa2,PodSandboxId:050e29796e725a6f07f4cc48aef1f38c2a0aebf677e2719716918b6e65de342a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193169049097048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ab71d9e2769e4182c88a6eb69c8122b,},Annotations:map[string]string{io.kubernetes.container.hash: dfd8d50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ba015fd640fda2171160b84f0a095794044e81a7399129debb70a95b42a575,PodSandboxId:44c56a97476a82cf7683b3fe872c9a4d07df73b8972d1ccc3b6ba856fc0dd88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710193169107290776,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a62d4b44a6092755ab406b1e90d15d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4204959d26a528a733e6a7fa26e1713a70b7e38a551fff229e5a4fea09488e0f,PodSandboxId:239a8b464db4f02efd7749346c1df15d1845bf3bf367ae19492efe6e2c1b9ea5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193169046953603,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be4934f6a04f3c4cd4c7f296acc8388,},Annotations:map[string]string{io.kubernetes.container.hash: 9d16a9fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ce09219ccdf054c50e8ba218609b581ede2f5176b69a7658537ca3028fd498,PodSandboxId:793cb1b96101c89dc8306ca2677f480c465f83d2707a1049b42e99f314a3e27e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193168998813148,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1285b61656e642fefcf84d28bd25203,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f8cdf52-765a-4f05-8a89-8afc889c6218 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.925119369Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7710a5c0-b8d9-45cb-ab47-d12aacb6ccad name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.925698142Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4f033c8242f61023c64508a0545af22b41c820d4ff51bce7ca65f7de639836b5,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-hct77,Uid:ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710193190594715867,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-hct77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-11T21:39:48.783827402Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:43387911d61cb4d07d6f1fb9b52b7769cfe6b47e58b83a4e5463857d1bc4c216,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-58ct4,Uid:96fa2415-2468-4a6d-887f-5eb6e455bbea,Namespace:kube-s
ystem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710193190571590800,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-58ct4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa2415-2468-4a6d-887f-5eb6e455bbea,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-11T21:39:48.763734278Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4c7b992dad641fc2394326207739e711f1a9c95ba15462067ff366dc4dc4940,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-9z7nz,Uid:6a161d6c-584f-47ef-86f2-40e7870d372e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710193190484150304,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-9z7nz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a161d6c-584f-47ef-86f2-40e7870d372e,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:m
ap[string]string{kubernetes.io/config.seen: 2024-03-11T21:39:50.156978534Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a0b2d2af8dc36b2322fa28253098075739c367de4bad1995d47b81cebf24b347,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2096cbb5-d96f-48f5-a04a-eb596646c8ed,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710193190377646171,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2096cbb5-d96f-48f5-a04a-eb596646c8ed,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":
[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-11T21:39:50.071088943Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cd4ad099890fe71f332d6eec01f238230e611608b938e29ab6d8e8c77ac7e689,Metadata:&PodSandboxMetadata{Name:kube-proxy-7xmlm,Uid:f18fd74c-17fa-44f1-a7e4-ab19fffe497b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710193188807199067,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7xmlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18fd74c-17fa-44f1-a7e4-ab19fffe497b,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-11T21:39:48.494771116Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:44c56a97476a82cf7683b3fe872c9a4d07df73b8972d1ccc3b6ba856fc0dd88d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-743937,Uid:6a62d4b44a6092755ab406b1e90d15d2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710193168786791627,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a62d4b44a6092755ab406b1e90d15d2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6a62d4b44a6092755ab406b1e90d15d2,kubernetes.io/config.seen: 2024-03-11T21:39:28.318988971Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:239a8b464db4f02efd7749346c1df15d1845bf3bf367ae19492efe6e2c1b9ea5,Metadata:&PodSandboxMetadata{Name:kube-apiserver
-embed-certs-743937,Uid:1be4934f6a04f3c4cd4c7f296acc8388,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710193168771533785,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be4934f6a04f3c4cd4c7f296acc8388,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.114:8443,kubernetes.io/config.hash: 1be4934f6a04f3c4cd4c7f296acc8388,kubernetes.io/config.seen: 2024-03-11T21:39:28.318986888Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:050e29796e725a6f07f4cc48aef1f38c2a0aebf677e2719716918b6e65de342a,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-743937,Uid:3ab71d9e2769e4182c88a6eb69c8122b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710193168765933441,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,
io.kubernetes.pod.name: etcd-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ab71d9e2769e4182c88a6eb69c8122b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.114:2379,kubernetes.io/config.hash: 3ab71d9e2769e4182c88a6eb69c8122b,kubernetes.io/config.seen: 2024-03-11T21:39:28.318975461Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:793cb1b96101c89dc8306ca2677f480c465f83d2707a1049b42e99f314a3e27e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-743937,Uid:e1285b61656e642fefcf84d28bd25203,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710193168764639269,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1285b61656e642fefcf84d28bd25203,tier: control-plane,},Annotations:map[str
ing]string{kubernetes.io/config.hash: e1285b61656e642fefcf84d28bd25203,kubernetes.io/config.seen: 2024-03-11T21:39:28.318988123Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7710a5c0-b8d9-45cb-ab47-d12aacb6ccad name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.926717951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94a8185f-17d8-49d4-82a8-4a33379360a4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.926771336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94a8185f-17d8-49d4-82a8-4a33379360a4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.926980325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c735d180d5d0680318bcfdd8e1508a82b2181aef6108badc75c9d29b0713af9,PodSandboxId:43387911d61cb4d07d6f1fb9b52b7769cfe6b47e58b83a4e5463857d1bc4c216,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190961600303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-58ct4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa2415-2468-4a6d-887f-5eb6e455bbea,},Annotations:map[string]string{io.kubernetes.container.hash: 2b42a678,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4290fa687c68e62428910cf34c67eba8505eebffa114ebfc5fabe86ed057e4a8,PodSandboxId:4f033c8242f61023c64508a0545af22b41c820d4ff51bce7ca65f7de639836b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190977082194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hct77,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31,},Annotations:map[string]string{io.kubernetes.container.hash: ac3c9c5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b933a93694d7512040b9cc8038beec371ceaa7ae68f6990c4e899e1732503bd5,PodSandboxId:a0b2d2af8dc36b2322fa28253098075739c367de4bad1995d47b81cebf24b347,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1710193190469893544,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2096cbb5-d96f-48f5-a04a-eb596646c8ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8016b8d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11079c6b59c6771cb52b55b16525d47ef7a0c4a1a3717185d973b0cdb18aadf1,PodSandboxId:cd4ad099890fe71f332d6eec01f238230e611608b938e29ab6d8e8c77ac7e689,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710193188958511076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xmlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18fd74c-17fa-44f1-a7e4-ab19fffe497b,},Annotations:map[string]string{io.kubernetes.container.hash: 710f9e96,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe64dcf976f8a0834063fd35ba390a65c7e0bfe5003a39b02b08afa61573aa2,PodSandboxId:050e29796e725a6f07f4cc48aef1f38c2a0aebf677e2719716918b6e65de342a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193169049097048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ab71d9e2769e4182c88a6eb69c8122b,},Annotations:map[string]string{io.kubernetes.container.hash: dfd8d50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ba015fd640fda2171160b84f0a095794044e81a7399129debb70a95b42a575,PodSandboxId:44c56a97476a82cf7683b3fe872c9a4d07df73b8972d1ccc3b6ba856fc0dd88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710193169107290776,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a62d4b44a6092755ab406b1e90d15d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4204959d26a528a733e6a7fa26e1713a70b7e38a551fff229e5a4fea09488e0f,PodSandboxId:239a8b464db4f02efd7749346c1df15d1845bf3bf367ae19492efe6e2c1b9ea5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193169046953603,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be4934f6a04f3c4cd4c7f296acc8388,},Annotations:map[string]string{io.kubernetes.container.hash: 9d16a9fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ce09219ccdf054c50e8ba218609b581ede2f5176b69a7658537ca3028fd498,PodSandboxId:793cb1b96101c89dc8306ca2677f480c465f83d2707a1049b42e99f314a3e27e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193168998813148,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1285b61656e642fefcf84d28bd25203,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94a8185f-17d8-49d4-82a8-4a33379360a4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.957536430Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ed4335b-9ae5-4ff6-96c1-34e70988bc0e name=/runtime.v1.RuntimeService/Version
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.957610554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ed4335b-9ae5-4ff6-96c1-34e70988bc0e name=/runtime.v1.RuntimeService/Version
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.959727980Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=591b7dd8-b8c5-48b4-bf94-1cffa9ac2b60 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.960611852Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194069960584527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=591b7dd8-b8c5-48b4-bf94-1cffa9ac2b60 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.962046710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e74c7e53-f3f5-4998-af19-fbafeaa4ff81 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.962097214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e74c7e53-f3f5-4998-af19-fbafeaa4ff81 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:29 embed-certs-743937 crio[686]: time="2024-03-11 21:54:29.962861590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c735d180d5d0680318bcfdd8e1508a82b2181aef6108badc75c9d29b0713af9,PodSandboxId:43387911d61cb4d07d6f1fb9b52b7769cfe6b47e58b83a4e5463857d1bc4c216,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190961600303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-58ct4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa2415-2468-4a6d-887f-5eb6e455bbea,},Annotations:map[string]string{io.kubernetes.container.hash: 2b42a678,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4290fa687c68e62428910cf34c67eba8505eebffa114ebfc5fabe86ed057e4a8,PodSandboxId:4f033c8242f61023c64508a0545af22b41c820d4ff51bce7ca65f7de639836b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190977082194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hct77,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31,},Annotations:map[string]string{io.kubernetes.container.hash: ac3c9c5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b933a93694d7512040b9cc8038beec371ceaa7ae68f6990c4e899e1732503bd5,PodSandboxId:a0b2d2af8dc36b2322fa28253098075739c367de4bad1995d47b81cebf24b347,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1710193190469893544,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2096cbb5-d96f-48f5-a04a-eb596646c8ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8016b8d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11079c6b59c6771cb52b55b16525d47ef7a0c4a1a3717185d973b0cdb18aadf1,PodSandboxId:cd4ad099890fe71f332d6eec01f238230e611608b938e29ab6d8e8c77ac7e689,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710193188958511076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xmlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18fd74c-17fa-44f1-a7e4-ab19fffe497b,},Annotations:map[string]string{io.kubernetes.container.hash: 710f9e96,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe64dcf976f8a0834063fd35ba390a65c7e0bfe5003a39b02b08afa61573aa2,PodSandboxId:050e29796e725a6f07f4cc48aef1f38c2a0aebf677e2719716918b6e65de342a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193169049097048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ab71d9e2769e4182c88a6eb69c8122b,},Annotations:map[string]string{io.kubernetes.container.hash: dfd8d50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ba015fd640fda2171160b84f0a095794044e81a7399129debb70a95b42a575,PodSandboxId:44c56a97476a82cf7683b3fe872c9a4d07df73b8972d1ccc3b6ba856fc0dd88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710193169107290776,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a62d4b44a6092755ab406b1e90d15d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4204959d26a528a733e6a7fa26e1713a70b7e38a551fff229e5a4fea09488e0f,PodSandboxId:239a8b464db4f02efd7749346c1df15d1845bf3bf367ae19492efe6e2c1b9ea5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193169046953603,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be4934f6a04f3c4cd4c7f296acc8388,},Annotations:map[string]string{io.kubernetes.container.hash: 9d16a9fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ce09219ccdf054c50e8ba218609b581ede2f5176b69a7658537ca3028fd498,PodSandboxId:793cb1b96101c89dc8306ca2677f480c465f83d2707a1049b42e99f314a3e27e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193168998813148,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1285b61656e642fefcf84d28bd25203,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e74c7e53-f3f5-4998-af19-fbafeaa4ff81 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:30 embed-certs-743937 crio[686]: time="2024-03-11 21:54:30.020359314Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ecfb3043-24af-4c79-b3f2-efeb7002d0fb name=/runtime.v1.RuntimeService/Version
	Mar 11 21:54:30 embed-certs-743937 crio[686]: time="2024-03-11 21:54:30.020516173Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ecfb3043-24af-4c79-b3f2-efeb7002d0fb name=/runtime.v1.RuntimeService/Version
	Mar 11 21:54:30 embed-certs-743937 crio[686]: time="2024-03-11 21:54:30.021855818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d66e9550-8353-47e7-aad4-f2e9b4944d5a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:54:30 embed-certs-743937 crio[686]: time="2024-03-11 21:54:30.022302210Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194070022277177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d66e9550-8353-47e7-aad4-f2e9b4944d5a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:54:30 embed-certs-743937 crio[686]: time="2024-03-11 21:54:30.023063999Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5033403b-db27-4403-8c01-1c343f8b43b8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:30 embed-certs-743937 crio[686]: time="2024-03-11 21:54:30.023113033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5033403b-db27-4403-8c01-1c343f8b43b8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:54:30 embed-certs-743937 crio[686]: time="2024-03-11 21:54:30.023305061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c735d180d5d0680318bcfdd8e1508a82b2181aef6108badc75c9d29b0713af9,PodSandboxId:43387911d61cb4d07d6f1fb9b52b7769cfe6b47e58b83a4e5463857d1bc4c216,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190961600303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-58ct4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa2415-2468-4a6d-887f-5eb6e455bbea,},Annotations:map[string]string{io.kubernetes.container.hash: 2b42a678,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4290fa687c68e62428910cf34c67eba8505eebffa114ebfc5fabe86ed057e4a8,PodSandboxId:4f033c8242f61023c64508a0545af22b41c820d4ff51bce7ca65f7de639836b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193190977082194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hct77,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31,},Annotations:map[string]string{io.kubernetes.container.hash: ac3c9c5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b933a93694d7512040b9cc8038beec371ceaa7ae68f6990c4e899e1732503bd5,PodSandboxId:a0b2d2af8dc36b2322fa28253098075739c367de4bad1995d47b81cebf24b347,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1710193190469893544,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2096cbb5-d96f-48f5-a04a-eb596646c8ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8016b8d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11079c6b59c6771cb52b55b16525d47ef7a0c4a1a3717185d973b0cdb18aadf1,PodSandboxId:cd4ad099890fe71f332d6eec01f238230e611608b938e29ab6d8e8c77ac7e689,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710193188958511076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7xmlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f18fd74c-17fa-44f1-a7e4-ab19fffe497b,},Annotations:map[string]string{io.kubernetes.container.hash: 710f9e96,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fe64dcf976f8a0834063fd35ba390a65c7e0bfe5003a39b02b08afa61573aa2,PodSandboxId:050e29796e725a6f07f4cc48aef1f38c2a0aebf677e2719716918b6e65de342a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193169049097048,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ab71d9e2769e4182c88a6eb69c8122b,},Annotations:map[string]string{io.kubernetes.container.hash: dfd8d50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ba015fd640fda2171160b84f0a095794044e81a7399129debb70a95b42a575,PodSandboxId:44c56a97476a82cf7683b3fe872c9a4d07df73b8972d1ccc3b6ba856fc0dd88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710193169107290776,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a62d4b44a6092755ab406b1e90d15d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4204959d26a528a733e6a7fa26e1713a70b7e38a551fff229e5a4fea09488e0f,PodSandboxId:239a8b464db4f02efd7749346c1df15d1845bf3bf367ae19492efe6e2c1b9ea5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193169046953603,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be4934f6a04f3c4cd4c7f296acc8388,},Annotations:map[string]string{io.kubernetes.container.hash: 9d16a9fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ce09219ccdf054c50e8ba218609b581ede2f5176b69a7658537ca3028fd498,PodSandboxId:793cb1b96101c89dc8306ca2677f480c465f83d2707a1049b42e99f314a3e27e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193168998813148,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-743937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1285b61656e642fefcf84d28bd25203,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5033403b-db27-4403-8c01-1c343f8b43b8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4290fa687c68e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   4f033c8242f61       coredns-5dd5756b68-hct77
	7c735d180d5d0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   43387911d61cb       coredns-5dd5756b68-58ct4
	b933a93694d75       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   a0b2d2af8dc36       storage-provisioner
	11079c6b59c67       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   cd4ad099890fe       kube-proxy-7xmlm
	46ba015fd640f       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   44c56a97476a8       kube-scheduler-embed-certs-743937
	0fe64dcf976f8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   050e29796e725       etcd-embed-certs-743937
	4204959d26a52       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   239a8b464db4f       kube-apiserver-embed-certs-743937
	33ce09219ccdf       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   793cb1b96101c       kube-controller-manager-embed-certs-743937
	
	
	==> coredns [4290fa687c68e62428910cf34c67eba8505eebffa114ebfc5fabe86ed057e4a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [7c735d180d5d0680318bcfdd8e1508a82b2181aef6108badc75c9d29b0713af9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               embed-certs-743937
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-743937
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=embed-certs-743937
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T21_39_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 21:39:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-743937
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:54:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 21:50:09 +0000   Mon, 11 Mar 2024 21:39:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 21:50:09 +0000   Mon, 11 Mar 2024 21:39:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 21:50:09 +0000   Mon, 11 Mar 2024 21:39:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 21:50:09 +0000   Mon, 11 Mar 2024 21:39:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.114
	  Hostname:    embed-certs-743937
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7be0937769334b2a86e68256de27730e
	  System UUID:                7be09377-6933-4b2a-86e6-8256de27730e
	  Boot ID:                    c4b5ec1a-ad68-4b58-9017-148856cd6f08
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-58ct4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5dd5756b68-hct77                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-743937                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-embed-certs-743937             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-embed-certs-743937    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-7xmlm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-743937             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-9z7nz               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node embed-certs-743937 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node embed-certs-743937 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node embed-certs-743937 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m   node-controller  Node embed-certs-743937 event: Registered Node embed-certs-743937 in Controller
	
	
	==> dmesg <==
	[  +0.056310] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047046] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.551317] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.498204] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.708342] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.912857] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.061955] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058452] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.200767] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.152643] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.312406] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +6.201291] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.070436] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.882269] systemd-fstab-generator[894]: Ignoring "noauto" option for root device
	[  +6.626191] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.971657] kauditd_printk_skb: 74 callbacks suppressed
	[Mar11 21:39] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.909170] systemd-fstab-generator[3420]: Ignoring "noauto" option for root device
	[  +7.788719] systemd-fstab-generator[3745]: Ignoring "noauto" option for root device
	[  +0.090633] kauditd_printk_skb: 57 callbacks suppressed
	[ +12.386757] systemd-fstab-generator[3944]: Ignoring "noauto" option for root device
	[  +0.091140] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.150786] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [0fe64dcf976f8a0834063fd35ba390a65c7e0bfe5003a39b02b08afa61573aa2] <==
	{"level":"info","ts":"2024-03-11T21:39:29.668992Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f0e2ae880f3a35e5","initial-advertise-peer-urls":["https://192.168.50.114:2380"],"listen-peer-urls":["https://192.168.50.114:2380"],"advertise-client-urls":["https://192.168.50.114:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.114:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T21:39:29.669048Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T21:39:29.669162Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.114:2380"}
	{"level":"info","ts":"2024-03-11T21:39:29.669192Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.114:2380"}
	{"level":"info","ts":"2024-03-11T21:39:30.502716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-11T21:39:30.50279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-11T21:39:30.502808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 received MsgPreVoteResp from f0e2ae880f3a35e5 at term 1"}
	{"level":"info","ts":"2024-03-11T21:39:30.50282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 became candidate at term 2"}
	{"level":"info","ts":"2024-03-11T21:39:30.502827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 received MsgVoteResp from f0e2ae880f3a35e5 at term 2"}
	{"level":"info","ts":"2024-03-11T21:39:30.502862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 became leader at term 2"}
	{"level":"info","ts":"2024-03-11T21:39:30.502869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f0e2ae880f3a35e5 elected leader f0e2ae880f3a35e5 at term 2"}
	{"level":"info","ts":"2024-03-11T21:39:30.504561Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:39:30.506082Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f0e2ae880f3a35e5","local-member-attributes":"{Name:embed-certs-743937 ClientURLs:[https://192.168.50.114:2379]}","request-path":"/0/members/f0e2ae880f3a35e5/attributes","cluster-id":"659e1302ad88139d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T21:39:30.506704Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:39:30.508434Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T21:39:30.508493Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T21:39:30.509499Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"659e1302ad88139d","local-member-id":"f0e2ae880f3a35e5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:39:30.509714Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:39:30.507047Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:39:30.512632Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T21:39:30.512939Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:39:30.513709Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.114:2379"}
	{"level":"info","ts":"2024-03-11T21:49:30.576076Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":677}
	{"level":"info","ts":"2024-03-11T21:49:30.579155Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":677,"took":"2.20268ms","hash":6879168}
	{"level":"info","ts":"2024-03-11T21:49:30.5793Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":6879168,"revision":677,"compact-revision":-1}
	
	
	==> kernel <==
	 21:54:30 up 20 min,  0 users,  load average: 0.22, 0.14, 0.11
	Linux embed-certs-743937 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4204959d26a528a733e6a7fa26e1713a70b7e38a551fff229e5a4fea09488e0f] <==
	W0311 21:49:33.299808       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:49:33.299876       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:49:33.299885       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:49:33.299959       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:49:33.300135       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:49:33.301467       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 21:50:32.149930       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0311 21:50:33.300671       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:50:33.300849       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:50:33.300884       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:50:33.301780       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:50:33.301887       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:50:33.301928       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 21:51:32.149545       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 21:52:32.149622       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0311 21:52:33.301358       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:52:33.301708       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:52:33.301788       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:52:33.302532       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:52:33.302689       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:52:33.303807       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 21:53:32.150214       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [33ce09219ccdf054c50e8ba218609b581ede2f5176b69a7658537ca3028fd498] <==
	I0311 21:48:48.405829       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:49:17.929520       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:49:18.417657       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:49:47.935712       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:49:48.426767       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:50:17.942109       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:50:18.436727       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:50:47.947694       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:50:48.444852       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0311 21:50:58.944425       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="301.866µs"
	I0311 21:51:10.940764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="150.341µs"
	E0311 21:51:17.953069       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:51:18.453137       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:51:47.961327       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:51:48.463861       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:52:17.967887       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:52:18.474358       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:52:47.974159       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:52:48.483896       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:53:17.980630       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:53:18.497207       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:53:47.986642       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:53:48.505506       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:54:17.992807       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:54:18.517174       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [11079c6b59c6771cb52b55b16525d47ef7a0c4a1a3717185d973b0cdb18aadf1] <==
	I0311 21:39:49.257780       1 server_others.go:69] "Using iptables proxy"
	I0311 21:39:49.270995       1 node.go:141] Successfully retrieved node IP: 192.168.50.114
	I0311 21:39:49.328282       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 21:39:49.328348       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 21:39:49.331254       1 server_others.go:152] "Using iptables Proxier"
	I0311 21:39:49.331898       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 21:39:49.332101       1 server.go:846] "Version info" version="v1.28.4"
	I0311 21:39:49.332141       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:39:49.333562       1 config.go:188] "Starting service config controller"
	I0311 21:39:49.337601       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 21:39:49.337676       1 config.go:97] "Starting endpoint slice config controller"
	I0311 21:39:49.337683       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 21:39:49.340436       1 config.go:315] "Starting node config controller"
	I0311 21:39:49.340516       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 21:39:49.445436       1 shared_informer.go:318] Caches are synced for node config
	I0311 21:39:49.445460       1 shared_informer.go:318] Caches are synced for service config
	I0311 21:39:49.445486       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [46ba015fd640fda2171160b84f0a095794044e81a7399129debb70a95b42a575] <==
	W0311 21:39:32.319326       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 21:39:32.320086       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 21:39:32.319359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 21:39:32.319564       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 21:39:32.319615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 21:39:32.320654       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 21:39:32.320606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 21:39:32.320640       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 21:39:33.176303       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 21:39:33.177616       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 21:39:33.202743       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 21:39:33.202847       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0311 21:39:33.224577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 21:39:33.224713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 21:39:33.225339       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 21:39:33.225486       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0311 21:39:33.236944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 21:39:33.237140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 21:39:33.300836       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0311 21:39:33.300889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0311 21:39:33.354617       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 21:39:33.354856       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 21:39:33.562973       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 21:39:33.563109       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0311 21:39:36.113285       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 21:51:35 embed-certs-743937 kubelet[3752]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:51:35 embed-certs-743937 kubelet[3752]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:51:49 embed-certs-743937 kubelet[3752]: E0311 21:51:49.926085    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:52:03 embed-certs-743937 kubelet[3752]: E0311 21:52:03.926151    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:52:15 embed-certs-743937 kubelet[3752]: E0311 21:52:15.925569    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:52:27 embed-certs-743937 kubelet[3752]: E0311 21:52:27.927626    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:52:35 embed-certs-743937 kubelet[3752]: E0311 21:52:35.951166    3752 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:52:35 embed-certs-743937 kubelet[3752]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:52:35 embed-certs-743937 kubelet[3752]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:52:35 embed-certs-743937 kubelet[3752]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:52:35 embed-certs-743937 kubelet[3752]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:52:40 embed-certs-743937 kubelet[3752]: E0311 21:52:40.925232    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:52:53 embed-certs-743937 kubelet[3752]: E0311 21:52:53.925517    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:53:06 embed-certs-743937 kubelet[3752]: E0311 21:53:06.925874    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:53:20 embed-certs-743937 kubelet[3752]: E0311 21:53:20.924973    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:53:33 embed-certs-743937 kubelet[3752]: E0311 21:53:33.925246    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:53:35 embed-certs-743937 kubelet[3752]: E0311 21:53:35.950908    3752 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:53:35 embed-certs-743937 kubelet[3752]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:53:35 embed-certs-743937 kubelet[3752]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:53:35 embed-certs-743937 kubelet[3752]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:53:35 embed-certs-743937 kubelet[3752]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:53:45 embed-certs-743937 kubelet[3752]: E0311 21:53:45.926796    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:54:00 embed-certs-743937 kubelet[3752]: E0311 21:54:00.925175    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:54:15 embed-certs-743937 kubelet[3752]: E0311 21:54:15.926580    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	Mar 11 21:54:27 embed-certs-743937 kubelet[3752]: E0311 21:54:27.926450    3752 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9z7nz" podUID="6a161d6c-584f-47ef-86f2-40e7870d372e"
	
	
	==> storage-provisioner [b933a93694d7512040b9cc8038beec371ceaa7ae68f6990c4e899e1732503bd5] <==
	I0311 21:39:50.720018       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 21:39:50.733845       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 21:39:50.733942       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 21:39:50.778995       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 21:39:50.779217       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-743937_5f22cdaf-7bd7-4fd5-aeea-671837d1c42a!
	I0311 21:39:50.779953       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e366ed60-4e73-471c-93f6-807bd709950c", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-743937_5f22cdaf-7bd7-4fd5-aeea-671837d1c42a became leader
	I0311 21:39:50.879491       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-743937_5f22cdaf-7bd7-4fd5-aeea-671837d1c42a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-743937 -n embed-certs-743937
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-743937 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9z7nz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-743937 describe pod metrics-server-57f55c9bc5-9z7nz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-743937 describe pod metrics-server-57f55c9bc5-9z7nz: exit status 1 (67.357298ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9z7nz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-743937 describe pod metrics-server-57f55c9bc5-9z7nz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (334.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (344.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-766430 -n default-k8s-diff-port-766430
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-11 21:55:21.36392438 +0000 UTC m=+6329.535598686
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-766430 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-766430 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.323µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-766430 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766430 -n default-k8s-diff-port-766430
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-766430 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-766430 logs -n 25: (1.396633457s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-743937            | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239315        | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-766430       | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-324578                  | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:40 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-743937                 | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239315             | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC | 11 Mar 24 21:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:53 UTC | 11 Mar 24 21:53 UTC |
	| start   | -p newest-cni-649653 --memory=2200 --alsologtostderr   | newest-cni-649653            | jenkins | v1.32.0 | 11 Mar 24 21:53 UTC | 11 Mar 24 21:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:54 UTC | 11 Mar 24 21:54 UTC |
	| addons  | enable metrics-server -p newest-cni-649653             | newest-cni-649653            | jenkins | v1.32.0 | 11 Mar 24 21:54 UTC | 11 Mar 24 21:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-649653                                   | newest-cni-649653            | jenkins | v1.32.0 | 11 Mar 24 21:54 UTC | 11 Mar 24 21:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:54 UTC | 11 Mar 24 21:54 UTC |
	| addons  | enable dashboard -p newest-cni-649653                  | newest-cni-649653            | jenkins | v1.32.0 | 11 Mar 24 21:54 UTC | 11 Mar 24 21:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-649653 --memory=2200 --alsologtostderr   | newest-cni-649653            | jenkins | v1.32.0 | 11 Mar 24 21:54 UTC | 11 Mar 24 21:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-649653 image list                           | newest-cni-649653            | jenkins | v1.32.0 | 11 Mar 24 21:55 UTC | 11 Mar 24 21:55 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-649653                                   | newest-cni-649653            | jenkins | v1.32.0 | 11 Mar 24 21:55 UTC | 11 Mar 24 21:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-649653                                   | newest-cni-649653            | jenkins | v1.32.0 | 11 Mar 24 21:55 UTC | 11 Mar 24 21:55 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-649653                                   | newest-cni-649653            | jenkins | v1.32.0 | 11 Mar 24 21:55 UTC |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
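
Each multi-line entry in the audit table above is one minikube invocation whose arguments are wrapped across rows for width. As a worked example, the final start entry for newest-cni-649653 corresponds to a single command line; the Go sketch below replays it with os/exec. This is purely illustrative (it assumes a minikube binary on PATH and copies the flags verbatim from the table) and is not part of the test harness.

	package main
	
	import (
		"os"
		"os/exec"
	)
	
	func main() {
		// Flags copied from the last "start -p newest-cni-649653" rows of the audit table.
		cmd := exec.Command("minikube",
			"start", "-p", "newest-cni-649653",
			"--memory=2200", "--alsologtostderr",
			"--wait=apiserver,system_pods,default_sa",
			"--feature-gates", "ServerSideApply=true",
			"--network-plugin=cni",
			"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
			"--driver=kvm2", "--container-runtime=crio",
			"--kubernetes-version=v1.29.0-rc.2",
		)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
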
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 21:54:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 21:54:40.382555   76616 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:54:40.382847   76616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:54:40.382860   76616 out.go:304] Setting ErrFile to fd 2...
	I0311 21:54:40.382865   76616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:54:40.383131   76616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:54:40.383850   76616 out.go:298] Setting JSON to false
	I0311 21:54:40.385096   76616 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9429,"bootTime":1710184651,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:54:40.385174   76616 start.go:139] virtualization: kvm guest
	I0311 21:54:40.388654   76616 out.go:177] * [newest-cni-649653] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:54:40.390388   76616 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:54:40.390434   76616 notify.go:220] Checking for updates...
	I0311 21:54:40.392048   76616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:54:40.393833   76616 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:54:40.395289   76616 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:54:40.396699   76616 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:54:40.398100   76616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:54:40.399843   76616 config.go:182] Loaded profile config "newest-cni-649653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:54:40.400384   76616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:54:40.400431   76616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:54:40.416689   76616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45971
	I0311 21:54:40.417079   76616 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:54:40.417602   76616 main.go:141] libmachine: Using API Version  1
	I0311 21:54:40.417621   76616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:54:40.417908   76616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:54:40.418063   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:54:40.418313   76616 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:54:40.418567   76616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:54:40.418612   76616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:54:40.432888   76616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46003
	I0311 21:54:40.433300   76616 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:54:40.433762   76616 main.go:141] libmachine: Using API Version  1
	I0311 21:54:40.433786   76616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:54:40.434085   76616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:54:40.434258   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:54:40.466649   76616 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 21:54:40.467943   76616 start.go:297] selected driver: kvm2
	I0311 21:54:40.467959   76616 start.go:901] validating driver "kvm2" against &{Name:newest-cni-649653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:54:40.468050   76616 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:54:40.468673   76616 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:54:40.468755   76616 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:54:40.481864   76616 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:54:40.482237   76616 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0311 21:54:40.482276   76616 cni.go:84] Creating CNI manager for ""
	I0311 21:54:40.482287   76616 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:54:40.482332   76616 start.go:340] cluster config:
	{Name:newest-cni-649653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:54:40.482446   76616 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:54:40.484107   76616 out.go:177] * Starting "newest-cni-649653" primary control-plane node in "newest-cni-649653" cluster
	I0311 21:54:40.485375   76616 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 21:54:40.485408   76616 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0311 21:54:40.485416   76616 cache.go:56] Caching tarball of preloaded images
	I0311 21:54:40.485517   76616 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:54:40.485532   76616 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0311 21:54:40.485657   76616 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/config.json ...
	I0311 21:54:40.485880   76616 start.go:360] acquireMachinesLock for newest-cni-649653: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:54:40.485938   76616 start.go:364] duration metric: took 39.5µs to acquireMachinesLock for "newest-cni-649653"
	I0311 21:54:40.485955   76616 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:54:40.485961   76616 fix.go:54] fixHost starting: 
	I0311 21:54:40.486254   76616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:54:40.486286   76616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:54:40.500104   76616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I0311 21:54:40.500488   76616 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:54:40.500910   76616 main.go:141] libmachine: Using API Version  1
	I0311 21:54:40.500936   76616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:54:40.501206   76616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:54:40.501405   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:54:40.501559   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetState
	I0311 21:54:40.502975   76616 fix.go:112] recreateIfNeeded on newest-cni-649653: state=Stopped err=<nil>
	I0311 21:54:40.503001   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	W0311 21:54:40.503182   76616 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:54:40.504874   76616 out.go:177] * Restarting existing kvm2 VM for "newest-cni-649653" ...
	I0311 21:54:40.506062   76616 main.go:141] libmachine: (newest-cni-649653) Calling .Start
	I0311 21:54:40.506224   76616 main.go:141] libmachine: (newest-cni-649653) Ensuring networks are active...
	I0311 21:54:40.506857   76616 main.go:141] libmachine: (newest-cni-649653) Ensuring network default is active
	I0311 21:54:40.507204   76616 main.go:141] libmachine: (newest-cni-649653) Ensuring network mk-newest-cni-649653 is active
	I0311 21:54:40.507526   76616 main.go:141] libmachine: (newest-cni-649653) Getting domain xml...
	I0311 21:54:40.508185   76616 main.go:141] libmachine: (newest-cni-649653) Creating domain...
	I0311 21:54:41.708223   76616 main.go:141] libmachine: (newest-cni-649653) Waiting to get IP...
	I0311 21:54:41.709047   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:41.709488   76616 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:54:41.709552   76616 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:54:41.709479   76651 retry.go:31] will retry after 211.141888ms: waiting for machine to come up
	I0311 21:54:41.921829   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:41.922322   76616 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:54:41.922348   76616 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:54:41.922271   76651 retry.go:31] will retry after 334.049372ms: waiting for machine to come up
	I0311 21:54:42.258024   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:42.258514   76616 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:54:42.258543   76616 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:54:42.258475   76651 retry.go:31] will retry after 457.418034ms: waiting for machine to come up
	I0311 21:54:42.716950   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:42.717376   76616 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:54:42.717406   76616 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:54:42.717340   76651 retry.go:31] will retry after 576.924401ms: waiting for machine to come up
	I0311 21:54:43.296106   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:43.296468   76616 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:54:43.296517   76616 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:54:43.296420   76651 retry.go:31] will retry after 607.798402ms: waiting for machine to come up
	I0311 21:54:43.906153   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:43.906604   76616 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:54:43.906631   76616 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:54:43.906548   76651 retry.go:31] will retry after 755.119314ms: waiting for machine to come up
	I0311 21:54:44.662881   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:44.663388   76616 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:54:44.663418   76616 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:54:44.663332   76651 retry.go:31] will retry after 1.000825975s: waiting for machine to come up
	I0311 21:54:45.665886   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:45.666312   76616 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:54:45.666344   76616 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:54:45.666257   76651 retry.go:31] will retry after 1.179916822s: waiting for machine to come up
	I0311 21:54:46.847508   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:46.847865   76616 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:54:46.847893   76616 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:54:46.847838   76651 retry.go:31] will retry after 1.668192714s: waiting for machine to come up
	I0311 21:54:48.517897   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:48.518371   76616 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:54:48.518410   76616 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:54:48.518337   76651 retry.go:31] will retry after 1.509723162s: waiting for machine to come up
	I0311 21:54:50.029978   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:50.030437   76616 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:54:50.030467   76616 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:54:50.030394   76651 retry.go:31] will retry after 2.891006897s: waiting for machine to come up
	I0311 21:54:52.922483   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:52.922883   76616 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:54:52.922914   76616 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:54:52.922830   76651 retry.go:31] will retry after 3.265012807s: waiting for machine to come up
	I0311 21:54:56.188999   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:54:56.189458   76616 main.go:141] libmachine: (newest-cni-649653) DBG | unable to find current IP address of domain newest-cni-649653 in network mk-newest-cni-649653
	I0311 21:54:56.189509   76616 main.go:141] libmachine: (newest-cni-649653) DBG | I0311 21:54:56.189411   76651 retry.go:31] will retry after 4.32186618s: waiting for machine to come up
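
The retry.go lines above show the driver repeatedly querying libvirt for the VM's DHCP lease, sleeping a growing interval between attempts until an IP appears. Below is a minimal Go sketch of that wait-with-backoff pattern; lookupIP, the delays, and the attempt cap are hypothetical placeholders for illustration, not minikube's actual implementation.

	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// lookupIP stands in for the DHCP-lease query being retried in the log;
	// it is a placeholder that always fails, so the loop below exercises the backoff.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}
	
	func main() {
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= 14; attempt++ {
			ip, err := lookupIP()
			if err == nil {
				fmt.Println("found IP:", ip)
				return
			}
			fmt.Printf("attempt %d: %v, will retry after %s\n", attempt, err, delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the wait between attempts, roughly like the log above
		}
		fmt.Println("gave up waiting for machine to come up")
	}
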
	I0311 21:55:00.512732   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.513147   76616 main.go:141] libmachine: (newest-cni-649653) Found IP for machine: 192.168.72.200
	I0311 21:55:00.513168   76616 main.go:141] libmachine: (newest-cni-649653) Reserving static IP address...
	I0311 21:55:00.513180   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has current primary IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.513522   76616 main.go:141] libmachine: (newest-cni-649653) Reserved static IP address: 192.168.72.200
	I0311 21:55:00.513546   76616 main.go:141] libmachine: (newest-cni-649653) Waiting for SSH to be available...
	I0311 21:55:00.513568   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "newest-cni-649653", mac: "52:54:00:de:e6:a4", ip: "192.168.72.200"} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:00.513624   76616 main.go:141] libmachine: (newest-cni-649653) DBG | skip adding static IP to network mk-newest-cni-649653 - found existing host DHCP lease matching {name: "newest-cni-649653", mac: "52:54:00:de:e6:a4", ip: "192.168.72.200"}
	I0311 21:55:00.513651   76616 main.go:141] libmachine: (newest-cni-649653) DBG | Getting to WaitForSSH function...
	I0311 21:55:00.515535   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.515854   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:00.515886   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.516097   76616 main.go:141] libmachine: (newest-cni-649653) DBG | Using SSH client type: external
	I0311 21:55:00.516123   76616 main.go:141] libmachine: (newest-cni-649653) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa (-rw-------)
	I0311 21:55:00.516160   76616 main.go:141] libmachine: (newest-cni-649653) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:55:00.516186   76616 main.go:141] libmachine: (newest-cni-649653) DBG | About to run SSH command:
	I0311 21:55:00.516199   76616 main.go:141] libmachine: (newest-cni-649653) DBG | exit 0
	I0311 21:55:00.640967   76616 main.go:141] libmachine: (newest-cni-649653) DBG | SSH cmd err, output: <nil>: 
	I0311 21:55:00.641334   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetConfigRaw
	I0311 21:55:00.642412   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetIP
	I0311 21:55:00.645340   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.645712   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:00.645746   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.645919   76616 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/config.json ...
	I0311 21:55:00.646122   76616 machine.go:94] provisionDockerMachine start ...
	I0311 21:55:00.646171   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:55:00.646382   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:00.648441   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.648727   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:00.648765   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.648902   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:55:00.649050   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:00.649159   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:00.649279   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:55:00.649436   76616 main.go:141] libmachine: Using SSH client type: native
	I0311 21:55:00.649624   76616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:55:00.649641   76616 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:55:00.753341   76616 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:55:00.753377   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetMachineName
	I0311 21:55:00.753647   76616 buildroot.go:166] provisioning hostname "newest-cni-649653"
	I0311 21:55:00.753677   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetMachineName
	I0311 21:55:00.753833   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:00.756016   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.756458   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:00.756487   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.756650   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:55:00.756833   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:00.757013   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:00.757182   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:55:00.757394   76616 main.go:141] libmachine: Using SSH client type: native
	I0311 21:55:00.757558   76616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:55:00.757574   76616 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-649653 && echo "newest-cni-649653" | sudo tee /etc/hostname
	I0311 21:55:00.876525   76616 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-649653
	
	I0311 21:55:00.876550   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:00.879479   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.879795   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:00.879821   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.880039   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:55:00.880309   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:00.880486   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:00.880647   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:55:00.880850   76616 main.go:141] libmachine: Using SSH client type: native
	I0311 21:55:00.881086   76616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:55:00.881105   76616 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-649653' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-649653/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-649653' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:55:00.996165   76616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
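
The hostname and /etc/hosts commands above are executed through the provisioner's native SSH client as the docker user with the machine's id_rsa key. As a rough illustration only (not minikube's code), the Go sketch below runs one remote command with golang.org/x/crypto/ssh against the address, user, and key path reported in this log; host-key checking is disabled to mirror the StrictHostKeyChecking=no option shown earlier.

	package main
	
	import (
		"fmt"
		"os"
		"time"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		// Key, user, and address as reported in the provisioning log above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", "192.168.72.200:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
	
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}
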
	I0311 21:55:00.996197   76616 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:55:00.996242   76616 buildroot.go:174] setting up certificates
	I0311 21:55:00.996251   76616 provision.go:84] configureAuth start
	I0311 21:55:00.996265   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetMachineName
	I0311 21:55:00.996523   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetIP
	I0311 21:55:00.999216   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.999567   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:00.999596   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:00.999727   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:01.002019   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.002360   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:01.002388   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.002547   76616 provision.go:143] copyHostCerts
	I0311 21:55:01.002614   76616 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:55:01.002628   76616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:55:01.002709   76616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:55:01.002815   76616 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:55:01.002826   76616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:55:01.002865   76616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:55:01.002947   76616 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:55:01.002957   76616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:55:01.002992   76616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:55:01.003101   76616 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.newest-cni-649653 san=[127.0.0.1 192.168.72.200 localhost minikube newest-cni-649653]
	I0311 21:55:01.060512   76616 provision.go:177] copyRemoteCerts
	I0311 21:55:01.060584   76616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:55:01.060613   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:01.063018   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.063336   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:01.063370   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.063530   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:55:01.063712   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:01.063862   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:55:01.063998   76616 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:55:01.147774   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:55:01.175193   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:55:01.201367   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0311 21:55:01.227779   76616 provision.go:87] duration metric: took 231.511908ms to configureAuth
	I0311 21:55:01.227803   76616 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:55:01.227979   76616 config.go:182] Loaded profile config "newest-cni-649653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:55:01.228073   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:01.230620   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.231009   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:01.231040   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.231220   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:55:01.231401   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:01.231554   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:01.231684   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:55:01.231862   76616 main.go:141] libmachine: Using SSH client type: native
	I0311 21:55:01.232031   76616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:55:01.232053   76616 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:55:01.524944   76616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:55:01.524976   76616 machine.go:97] duration metric: took 878.839721ms to provisionDockerMachine
	I0311 21:55:01.524989   76616 start.go:293] postStartSetup for "newest-cni-649653" (driver="kvm2")
	I0311 21:55:01.525005   76616 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:55:01.525029   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:55:01.525361   76616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:55:01.525389   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:01.528092   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.528485   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:01.528521   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.528673   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:55:01.528860   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:01.529000   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:55:01.529116   76616 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:55:01.612545   76616 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:55:01.617219   76616 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:55:01.617244   76616 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:55:01.617310   76616 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:55:01.617386   76616 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:55:01.617464   76616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:55:01.629036   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:55:01.655732   76616 start.go:296] duration metric: took 130.726358ms for postStartSetup
	I0311 21:55:01.655769   76616 fix.go:56] duration metric: took 21.169808367s for fixHost
	I0311 21:55:01.655789   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:01.658370   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.658728   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:01.658766   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.658974   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:55:01.659178   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:01.659368   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:01.659514   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:55:01.659684   76616 main.go:141] libmachine: Using SSH client type: native
	I0311 21:55:01.659854   76616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0311 21:55:01.659864   76616 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:55:01.766349   76616 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710194101.747054892
	
	I0311 21:55:01.766377   76616 fix.go:216] guest clock: 1710194101.747054892
	I0311 21:55:01.766388   76616 fix.go:229] Guest: 2024-03-11 21:55:01.747054892 +0000 UTC Remote: 2024-03-11 21:55:01.655773782 +0000 UTC m=+21.323020101 (delta=91.28111ms)
	I0311 21:55:01.766417   76616 fix.go:200] guest clock delta is within tolerance: 91.28111ms
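
The clock check above subtracts the host-side timestamp from the guest-side timestamp and compares the difference to a tolerance; the reported 91.28111ms is exactly the gap between the two times printed by fix.go:229. A small Go snippet reproducing that arithmetic from the logged values:

	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		// Nanosecond-precision timestamps copied from the fix.go lines above.
		guest := time.Date(2024, time.March, 11, 21, 55, 1, 747054892, time.UTC)
		host := time.Date(2024, time.March, 11, 21, 55, 1, 655773782, time.UTC)
	
		delta := guest.Sub(host)
		fmt.Println(delta) // prints 91.28111ms, matching the reported delta
	}
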
	I0311 21:55:01.766429   76616 start.go:83] releasing machines lock for "newest-cni-649653", held for 21.280480001s
	I0311 21:55:01.766451   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:55:01.766693   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetIP
	I0311 21:55:01.769061   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.769406   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:01.769442   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.769570   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:55:01.770125   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:55:01.770310   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:55:01.770401   76616 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:55:01.770457   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:01.770543   76616 ssh_runner.go:195] Run: cat /version.json
	I0311 21:55:01.770567   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:01.773069   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.773263   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.773398   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:01.773418   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.773570   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:55:01.773648   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:01.773671   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:01.773718   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:01.773821   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:55:01.773893   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:55:01.773959   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:01.774026   76616 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:55:01.774080   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:55:01.774202   76616 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
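The two sshutil.go lines above open separate SSH sessions so the registry reachability check and the /version.json read can run concurrently. A rough sketch of that fan-out (minikube uses its own SSH runner; this illustration simply shells out to the system ssh client, reusing the key path and address from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "sync"
    )

    func main() {
        key := "/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa"
        host := "docker@192.168.72.200"
        cmds := []string{
            "curl -sS -m 2 https://registry.k8s.io/",
            "cat /version.json",
        }

        var wg sync.WaitGroup
        for _, c := range cmds {
            wg.Add(1)
            go func(c string) {
                defer wg.Done()
                // Run each remote command in its own session, in parallel.
                out, err := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no", host, c).CombinedOutput()
                fmt.Printf("%s -> err=%v\n%s\n", c, err, out)
            }(c)
        }
        wg.Wait()
    }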
	I0311 21:55:01.874208   76616 ssh_runner.go:195] Run: systemctl --version
	I0311 21:55:01.880460   76616 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:55:02.026243   76616 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:55:02.033370   76616 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:55:02.033430   76616 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:55:02.050423   76616 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:55:02.050445   76616 start.go:494] detecting cgroup driver to use...
	I0311 21:55:02.050497   76616 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:55:02.066752   76616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:55:02.080026   76616 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:55:02.080067   76616 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:55:02.094253   76616 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:55:02.108066   76616 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:55:02.222324   76616 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:55:02.391968   76616 docker.go:233] disabling docker service ...
	I0311 21:55:02.392047   76616 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:55:02.407218   76616 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:55:02.421227   76616 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:55:02.542534   76616 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:55:02.655913   76616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:55:02.671324   76616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:55:02.691500   76616 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:55:02.691553   76616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:55:02.702458   76616 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:55:02.702508   76616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:55:02.713504   76616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:55:02.724403   76616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:55:02.736031   76616 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:55:02.747058   76616 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:55:02.756580   76616 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:55:02.756627   76616 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:55:02.769888   76616 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:55:02.779342   76616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:55:02.901262   76616 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:55:03.057311   76616 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:55:03.057369   76616 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:55:03.063033   76616 start.go:562] Will wait 60s for crictl version
	I0311 21:55:03.063091   76616 ssh_runner.go:195] Run: which crictl
	I0311 21:55:03.067647   76616 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:55:03.107525   76616 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:55:03.107593   76616 ssh_runner.go:195] Run: crio --version
	I0311 21:55:03.138192   76616 ssh_runner.go:195] Run: crio --version
	I0311 21:55:03.171566   76616 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0311 21:55:03.172599   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetIP
	I0311 21:55:03.175256   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:03.175558   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:03.175588   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:03.175753   76616 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0311 21:55:03.180605   76616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:55:03.196332   76616 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0311 21:55:03.197502   76616 kubeadm.go:877] updating cluster {Name:newest-cni-649653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:55:03.197645   76616 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 21:55:03.197695   76616 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:55:03.241779   76616 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0311 21:55:03.241847   76616 ssh_runner.go:195] Run: which lz4
	I0311 21:55:03.246431   76616 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0311 21:55:03.251077   76616 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:55:03.251163   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0311 21:55:04.924635   76616 crio.go:444] duration metric: took 1.678228159s to copy over tarball
	I0311 21:55:04.924708   76616 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:55:07.571950   76616 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.64721216s)
	I0311 21:55:07.571986   76616 crio.go:451] duration metric: took 2.647325768s to extract the tarball
	I0311 21:55:07.571998   76616 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:55:07.611289   76616 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:55:07.661537   76616 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:55:07.661566   76616 cache_images.go:84] Images are preloaded, skipping loading
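crio.go decides whether the preload tarball step can be skipped by listing the images CRI-O already knows about (the earlier "couldn't find preloaded image" check is the same idea before extraction). A hedged sketch of such a check, parsing crictl's JSON output; the required-image value is taken from the log, the rest is illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // crictlImages mirrors the shape of `crictl images --output json`.
    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            fmt.Println("bad JSON:", err)
            return
        }
        want := "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" // image checked in the log above
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    fmt.Println("preloaded image found, skipping load")
                    return
                }
            }
        }
        fmt.Println("image missing, preload tarball needed")
    }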
	I0311 21:55:07.661581   76616 kubeadm.go:928] updating node { 192.168.72.200 8443 v1.29.0-rc.2 crio true true} ...
	I0311 21:55:07.661719   76616 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-649653 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:55:07.661801   76616 ssh_runner.go:195] Run: crio config
	I0311 21:55:07.717982   76616 cni.go:84] Creating CNI manager for ""
	I0311 21:55:07.718002   76616 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:55:07.718012   76616 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0311 21:55:07.718033   76616 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.200 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-649653 NodeName:newest-cni-649653 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:55:07.718163   76616 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-649653"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:55:07.718225   76616 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0311 21:55:07.730085   76616 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:55:07.730139   76616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:55:07.740783   76616 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0311 21:55:07.761264   76616 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0311 21:55:07.779822   76616 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
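The 2290-byte kubeadm.yaml copied to the node above is generated by minikube from the cluster parameters before being handed to `kubeadm init phase`. A minimal sketch of rendering a fragment like it with text/template (the template text and struct are illustrative assumptions, not minikube's actual kubeadm templates):

    package main

    import (
        "os"
        "text/template"
    )

    type clusterCfg struct {
        BindPort          int
        KubernetesVersion string
        PodSubnet         string
        ServiceSubnet     string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        // Values taken from the config dump logged earlier in this run.
        cfg := clusterCfg{
            BindPort:          8443,
            KubernetesVersion: "v1.29.0-rc.2",
            PodSubnet:         "10.42.0.0/16",
            ServiceSubnet:     "10.96.0.0/12",
        }
        if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }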
	I0311 21:55:07.799591   76616 ssh_runner.go:195] Run: grep 192.168.72.200	control-plane.minikube.internal$ /etc/hosts
	I0311 21:55:07.803947   76616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:55:07.817017   76616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:55:07.938206   76616 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:55:07.966195   76616 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653 for IP: 192.168.72.200
	I0311 21:55:07.966213   76616 certs.go:194] generating shared ca certs ...
	I0311 21:55:07.966228   76616 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:55:07.966389   76616 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:55:07.966444   76616 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:55:07.966469   76616 certs.go:256] generating profile certs ...
	I0311 21:55:07.966567   76616 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/client.key
	I0311 21:55:07.966646   76616 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key.da5ea2e9
	I0311 21:55:07.966693   76616 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.key
	I0311 21:55:07.966831   76616 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:55:07.966876   76616 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:55:07.966889   76616 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:55:07.966923   76616 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:55:07.966956   76616 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:55:07.966986   76616 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:55:07.967039   76616 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:55:07.967655   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:55:08.022352   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:55:08.056419   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:55:08.108610   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:55:08.141674   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0311 21:55:08.180417   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 21:55:08.207095   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:55:08.233216   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/newest-cni-649653/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:55:08.260005   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:55:08.287073   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:55:08.313985   76616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:55:08.339742   76616 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:55:08.358176   76616 ssh_runner.go:195] Run: openssl version
	I0311 21:55:08.364686   76616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:55:08.377659   76616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:55:08.382485   76616 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:55:08.382530   76616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:55:08.388526   76616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:55:08.401124   76616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:55:08.413632   76616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:55:08.418631   76616 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:55:08.418671   76616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:55:08.424874   76616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:55:08.440233   76616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:55:08.452428   76616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:55:08.457275   76616 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:55:08.457327   76616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:55:08.463383   76616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:55:08.476116   76616 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:55:08.481099   76616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:55:08.487762   76616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:55:08.493933   76616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:55:08.500133   76616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:55:08.506497   76616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:55:08.513015   76616 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
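The `openssl x509 -checkend 86400` calls above verify that each control-plane certificate will still be valid 24 hours from now, so a restart does not begin with about-to-expire certs. An equivalent check written in Go (the file path is reused from the log for illustration only):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }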
	I0311 21:55:08.519297   76616 kubeadm.go:391] StartCluster: {Name:newest-cni-649653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-649653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:55:08.519426   76616 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:55:08.519476   76616 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:55:08.566628   76616 cri.go:89] found id: ""
	I0311 21:55:08.566724   76616 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:55:08.580146   76616 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:55:08.580175   76616 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:55:08.580182   76616 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:55:08.580231   76616 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:55:08.593030   76616 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:55:08.593728   76616 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-649653" does not appear in /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:55:08.594093   76616 kubeconfig.go:62] /home/jenkins/minikube-integration/18358-11004/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-649653" cluster setting kubeconfig missing "newest-cni-649653" context setting]
	I0311 21:55:08.594825   76616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:55:08.675175   76616 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:55:08.687425   76616 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.200
	I0311 21:55:08.687465   76616 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:55:08.687480   76616 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:55:08.687536   76616 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:55:08.728850   76616 cri.go:89] found id: ""
	I0311 21:55:08.728921   76616 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:55:08.747045   76616 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:55:08.758271   76616 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:55:08.758299   76616 kubeadm.go:156] found existing configuration files:
	
	I0311 21:55:08.758345   76616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:55:08.768995   76616 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:55:08.769046   76616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:55:08.780645   76616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:55:08.792618   76616 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:55:08.792695   76616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:55:08.804157   76616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:55:08.816319   76616 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:55:08.816371   76616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:55:08.828380   76616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:55:08.840220   76616 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:55:08.840273   76616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:55:08.850940   76616 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:55:08.864793   76616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:55:08.984713   76616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:55:09.735745   76616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:55:09.946982   76616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:55:10.022529   76616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:55:10.109193   76616 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:55:10.109281   76616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:55:10.609852   76616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:55:11.109355   76616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:55:11.610417   76616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:55:11.629378   76616 api_server.go:72] duration metric: took 1.520185058s to wait for apiserver process to appear ...
	I0311 21:55:11.629406   76616 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:55:11.629429   76616 api_server.go:253] Checking apiserver healthz at https://192.168.72.200:8443/healthz ...
	I0311 21:55:14.054540   76616 api_server.go:279] https://192.168.72.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:55:14.054583   76616 api_server.go:103] status: https://192.168.72.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:55:14.054600   76616 api_server.go:253] Checking apiserver healthz at https://192.168.72.200:8443/healthz ...
	I0311 21:55:14.073247   76616 api_server.go:279] https://192.168.72.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:55:14.073277   76616 api_server.go:103] status: https://192.168.72.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:55:14.130484   76616 api_server.go:253] Checking apiserver healthz at https://192.168.72.200:8443/healthz ...
	I0311 21:55:14.139301   76616 api_server.go:279] https://192.168.72.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:55:14.139322   76616 api_server.go:103] status: https://192.168.72.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:55:14.629853   76616 api_server.go:253] Checking apiserver healthz at https://192.168.72.200:8443/healthz ...
	I0311 21:55:14.634577   76616 api_server.go:279] https://192.168.72.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:55:14.634601   76616 api_server.go:103] status: https://192.168.72.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:55:15.130359   76616 api_server.go:253] Checking apiserver healthz at https://192.168.72.200:8443/healthz ...
	I0311 21:55:15.134750   76616 api_server.go:279] https://192.168.72.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:55:15.134784   76616 api_server.go:103] status: https://192.168.72.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:55:15.629918   76616 api_server.go:253] Checking apiserver healthz at https://192.168.72.200:8443/healthz ...
	I0311 21:55:15.639103   76616 api_server.go:279] https://192.168.72.200:8443/healthz returned 200:
	ok
	I0311 21:55:15.646545   76616 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:55:15.646570   76616 api_server.go:131] duration metric: took 4.017156024s to wait for apiserver health ...
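The healthz wait above retries through the 403 (anonymous user forbidden) and 500 (post-start hooks still pending) responses until the endpoint returns 200. A hedged Go sketch of that polling loop, not minikube's actual api_server.go; the retry interval and timeout are assumptions:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        // The apiserver serves a self-signed cert during bootstrap, so this
        // illustrative check skips verification.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    return nil // healthz returned 200: ok
                }
                // 403 and 500 both mean "not ready yet": fall through and retry.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.200:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }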
	I0311 21:55:15.646578   76616 cni.go:84] Creating CNI manager for ""
	I0311 21:55:15.646584   76616 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:55:15.648166   76616 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:55:15.649465   76616 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:55:15.662775   76616 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:55:15.687859   76616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:55:15.705070   76616 system_pods.go:59] 8 kube-system pods found
	I0311 21:55:15.705100   76616 system_pods.go:61] "coredns-76f75df574-688gg" [0b3d26ae-e36c-437a-bcad-c7e8fa26a07b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:55:15.705107   76616 system_pods.go:61] "etcd-newest-cni-649653" [0165fccf-11d5-4ee3-a496-5fda099385d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:55:15.705114   76616 system_pods.go:61] "kube-apiserver-newest-cni-649653" [e538de26-96b2-4028-afa3-2a78f71fa1c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:55:15.705119   76616 system_pods.go:61] "kube-controller-manager-newest-cni-649653" [8cafd132-158b-4b13-9a6c-ef4ff4c346cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:55:15.705124   76616 system_pods.go:61] "kube-proxy-bjqff" [1dd10b77-aa4f-48ba-bef8-6f60ff15b2a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:55:15.705133   76616 system_pods.go:61] "kube-scheduler-newest-cni-649653" [2b449005-d1ea-4676-ba7a-4f36e7bf1bc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:55:15.705138   76616 system_pods.go:61] "metrics-server-57f55c9bc5-vrmgs" [43d932a1-a65f-47ff-a404-75b979b6ac84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:55:15.705142   76616 system_pods.go:61] "storage-provisioner" [452cbfa8-db06-40fe-a5d3-d7dd34269448] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:55:15.705149   76616 system_pods.go:74] duration metric: took 17.271081ms to wait for pod list to return data ...
	I0311 21:55:15.705157   76616 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:55:15.708505   76616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:55:15.708526   76616 node_conditions.go:123] node cpu capacity is 2
	I0311 21:55:15.708535   76616 node_conditions.go:105] duration metric: took 3.374395ms to run NodePressure ...
	I0311 21:55:15.708552   76616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:55:16.012338   76616 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:55:16.024287   76616 ops.go:34] apiserver oom_adj: -16
	I0311 21:55:16.024307   76616 kubeadm.go:591] duration metric: took 7.444119686s to restartPrimaryControlPlane
	I0311 21:55:16.024315   76616 kubeadm.go:393] duration metric: took 7.50502881s to StartCluster
	I0311 21:55:16.024329   76616 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:55:16.024399   76616 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:55:16.025561   76616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:55:16.025776   76616 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:55:16.027506   76616 out.go:177] * Verifying Kubernetes components...
	I0311 21:55:16.025844   76616 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:55:16.025986   76616 config.go:182] Loaded profile config "newest-cni-649653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:55:16.028932   76616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:55:16.028952   76616 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-649653"
	I0311 21:55:16.028962   76616 addons.go:69] Setting dashboard=true in profile "newest-cni-649653"
	I0311 21:55:16.028986   76616 addons.go:234] Setting addon dashboard=true in "newest-cni-649653"
	I0311 21:55:16.028991   76616 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-649653"
	W0311 21:55:16.028998   76616 addons.go:243] addon dashboard should already be in state true
	I0311 21:55:16.028994   76616 addons.go:69] Setting default-storageclass=true in profile "newest-cni-649653"
	I0311 21:55:16.029033   76616 host.go:66] Checking if "newest-cni-649653" exists ...
	W0311 21:55:16.028999   76616 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:55:16.029053   76616 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-649653"
	I0311 21:55:16.029095   76616 host.go:66] Checking if "newest-cni-649653" exists ...
	I0311 21:55:16.028955   76616 addons.go:69] Setting metrics-server=true in profile "newest-cni-649653"
	I0311 21:55:16.029133   76616 addons.go:234] Setting addon metrics-server=true in "newest-cni-649653"
	W0311 21:55:16.029142   76616 addons.go:243] addon metrics-server should already be in state true
	I0311 21:55:16.029176   76616 host.go:66] Checking if "newest-cni-649653" exists ...
	I0311 21:55:16.029439   76616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:55:16.029462   76616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:55:16.029495   76616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:55:16.029512   76616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:55:16.029529   76616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:55:16.029541   76616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:55:16.029547   76616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:55:16.029599   76616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:55:16.045406   76616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38001
	I0311 21:55:16.046049   76616 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:55:16.046632   76616 main.go:141] libmachine: Using API Version  1
	I0311 21:55:16.046656   76616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:55:16.047041   76616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:55:16.047590   76616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:55:16.047666   76616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:55:16.047757   76616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42773
	I0311 21:55:16.048230   76616 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:55:16.048766   76616 main.go:141] libmachine: Using API Version  1
	I0311 21:55:16.048786   76616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:55:16.049083   76616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43333
	I0311 21:55:16.049252   76616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0311 21:55:16.049271   76616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:55:16.049437   76616 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:55:16.049499   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetState
	I0311 21:55:16.049552   76616 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:55:16.049838   76616 main.go:141] libmachine: Using API Version  1
	I0311 21:55:16.049856   76616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:55:16.049971   76616 main.go:141] libmachine: Using API Version  1
	I0311 21:55:16.049980   76616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:55:16.050378   76616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:55:16.050390   76616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:55:16.050975   76616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:55:16.051019   76616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:55:16.051263   76616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:55:16.051299   76616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:55:16.052173   76616 addons.go:234] Setting addon default-storageclass=true in "newest-cni-649653"
	W0311 21:55:16.052187   76616 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:55:16.052206   76616 host.go:66] Checking if "newest-cni-649653" exists ...
	I0311 21:55:16.052433   76616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:55:16.052462   76616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:55:16.064668   76616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39487
	I0311 21:55:16.065513   76616 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:55:16.066012   76616 main.go:141] libmachine: Using API Version  1
	I0311 21:55:16.066040   76616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:55:16.066205   76616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44983
	I0311 21:55:16.066394   76616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:55:16.066544   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetState
	I0311 21:55:16.066603   76616 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:55:16.066996   76616 main.go:141] libmachine: Using API Version  1
	I0311 21:55:16.067021   76616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:55:16.067411   76616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:55:16.067605   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetState
	I0311 21:55:16.068630   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:55:16.070911   76616 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0311 21:55:16.069267   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:55:16.070098   76616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40651
	I0311 21:55:16.073732   76616 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0311 21:55:16.072699   76616 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:55:16.075216   76616 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0311 21:55:16.075240   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0311 21:55:16.075250   76616 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:55:16.076813   76616 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:55:16.076826   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:55:16.076839   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:16.075262   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:16.075959   76616 main.go:141] libmachine: Using API Version  1
	I0311 21:55:16.076901   76616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:55:16.077254   76616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:55:16.077462   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetState
	I0311 21:55:16.079904   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:55:16.081680   76616 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:55:16.080540   76616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0311 21:55:16.080546   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:16.080814   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:16.081418   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:55:16.081426   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:55:16.083282   76616 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:55:16.083298   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:55:16.083314   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:16.083467   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:16.083483   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:16.083545   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:16.083570   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:16.083635   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:16.083682   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:55:16.083751   76616 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:55:16.083754   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:55:16.083862   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:16.084116   76616 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:55:16.084194   76616 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:55:16.085014   76616 main.go:141] libmachine: Using API Version  1
	I0311 21:55:16.085039   76616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:55:16.085700   76616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:55:16.086051   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:16.086404   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:16.086423   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:16.086639   76616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:55:16.086671   76616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:55:16.086642   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:55:16.086903   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:16.087065   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:55:16.087189   76616 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:55:16.101377   76616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39225
	I0311 21:55:16.101834   76616 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:55:16.102412   76616 main.go:141] libmachine: Using API Version  1
	I0311 21:55:16.102439   76616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:55:16.102738   76616 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:55:16.102869   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetState
	I0311 21:55:16.104266   76616 main.go:141] libmachine: (newest-cni-649653) Calling .DriverName
	I0311 21:55:16.104519   76616 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:55:16.104534   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:55:16.104549   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHHostname
	I0311 21:55:16.107457   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:16.107929   76616 main.go:141] libmachine: (newest-cni-649653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e6:a4", ip: ""} in network mk-newest-cni-649653: {Iface:virbr3 ExpiryTime:2024-03-11 22:54:52 +0000 UTC Type:0 Mac:52:54:00:de:e6:a4 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:newest-cni-649653 Clientid:01:52:54:00:de:e6:a4}
	I0311 21:55:16.108032   76616 main.go:141] libmachine: (newest-cni-649653) DBG | domain newest-cni-649653 has defined IP address 192.168.72.200 and MAC address 52:54:00:de:e6:a4 in network mk-newest-cni-649653
	I0311 21:55:16.108291   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHPort
	I0311 21:55:16.108429   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHKeyPath
	I0311 21:55:16.108574   76616 main.go:141] libmachine: (newest-cni-649653) Calling .GetSSHUsername
	I0311 21:55:16.108696   76616 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/newest-cni-649653/id_rsa Username:docker}
	I0311 21:55:16.303020   76616 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:55:16.361996   76616 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:55:16.362108   76616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:55:16.416114   76616 api_server.go:72] duration metric: took 390.308513ms to wait for apiserver process to appear ...
	I0311 21:55:16.416140   76616 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:55:16.416159   76616 api_server.go:253] Checking apiserver healthz at https://192.168.72.200:8443/healthz ...
	I0311 21:55:16.446482   76616 api_server.go:279] https://192.168.72.200:8443/healthz returned 200:
	ok
	I0311 21:55:16.449546   76616 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:55:16.449567   76616 api_server.go:131] duration metric: took 33.419437ms to wait for apiserver health ...
	I0311 21:55:16.449577   76616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:55:16.460262   76616 system_pods.go:59] 8 kube-system pods found
	I0311 21:55:16.460284   76616 system_pods.go:61] "coredns-76f75df574-688gg" [0b3d26ae-e36c-437a-bcad-c7e8fa26a07b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:55:16.460296   76616 system_pods.go:61] "etcd-newest-cni-649653" [0165fccf-11d5-4ee3-a496-5fda099385d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:55:16.460310   76616 system_pods.go:61] "kube-apiserver-newest-cni-649653" [e538de26-96b2-4028-afa3-2a78f71fa1c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:55:16.460316   76616 system_pods.go:61] "kube-controller-manager-newest-cni-649653" [8cafd132-158b-4b13-9a6c-ef4ff4c346cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:55:16.460324   76616 system_pods.go:61] "kube-proxy-bjqff" [1dd10b77-aa4f-48ba-bef8-6f60ff15b2a6] Running
	I0311 21:55:16.460328   76616 system_pods.go:61] "kube-scheduler-newest-cni-649653" [2b449005-d1ea-4676-ba7a-4f36e7bf1bc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:55:16.460333   76616 system_pods.go:61] "metrics-server-57f55c9bc5-vrmgs" [43d932a1-a65f-47ff-a404-75b979b6ac84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:55:16.460339   76616 system_pods.go:61] "storage-provisioner" [452cbfa8-db06-40fe-a5d3-d7dd34269448] Running
	I0311 21:55:16.460344   76616 system_pods.go:74] duration metric: took 10.762738ms to wait for pod list to return data ...
	I0311 21:55:16.460354   76616 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:55:16.468051   76616 default_sa.go:45] found service account: "default"
	I0311 21:55:16.468073   76616 default_sa.go:55] duration metric: took 7.713841ms for default service account to be created ...
	I0311 21:55:16.468085   76616 kubeadm.go:576] duration metric: took 442.284158ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0311 21:55:16.468108   76616 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:55:16.469140   76616 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0311 21:55:16.469159   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0311 21:55:16.475240   76616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:55:16.478984   76616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:55:16.479009   76616 node_conditions.go:123] node cpu capacity is 2
	I0311 21:55:16.479044   76616 node_conditions.go:105] duration metric: took 10.905317ms to run NodePressure ...
	I0311 21:55:16.479062   76616 start.go:240] waiting for startup goroutines ...
	I0311 21:55:16.509513   76616 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:55:16.509532   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:55:16.521814   76616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:55:16.537461   76616 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0311 21:55:16.537484   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0311 21:55:16.576776   76616 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:55:16.576799   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:55:16.617771   76616 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0311 21:55:16.617800   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0311 21:55:16.655127   76616 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:55:16.655169   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:55:16.681349   76616 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0311 21:55:16.681376   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0311 21:55:16.708172   76616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:55:16.737189   76616 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0311 21:55:16.737210   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0311 21:55:16.804030   76616 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0311 21:55:16.804066   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0311 21:55:16.899786   76616 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0311 21:55:16.899808   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0311 21:55:16.911552   76616 main.go:141] libmachine: Making call to close driver server
	I0311 21:55:16.911576   76616 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:55:16.911852   76616 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:55:16.911870   76616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:55:16.911886   76616 main.go:141] libmachine: Making call to close driver server
	I0311 21:55:16.911894   76616 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:55:16.912140   76616 main.go:141] libmachine: (newest-cni-649653) DBG | Closing plugin on server side
	I0311 21:55:16.912151   76616 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:55:16.912165   76616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:55:16.918928   76616 main.go:141] libmachine: Making call to close driver server
	I0311 21:55:16.918944   76616 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:55:16.919189   76616 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:55:16.919204   76616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:55:16.919214   76616 main.go:141] libmachine: (newest-cni-649653) DBG | Closing plugin on server side
	I0311 21:55:16.941701   76616 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0311 21:55:16.941727   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0311 21:55:16.970951   76616 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0311 21:55:16.970976   76616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0311 21:55:17.026261   76616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0311 21:55:17.950116   76616 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.428263935s)
	I0311 21:55:17.950173   76616 main.go:141] libmachine: Making call to close driver server
	I0311 21:55:17.950189   76616 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:55:17.950485   76616 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:55:17.950504   76616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:55:17.950513   76616 main.go:141] libmachine: Making call to close driver server
	I0311 21:55:17.950521   76616 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:55:17.950728   76616 main.go:141] libmachine: (newest-cni-649653) DBG | Closing plugin on server side
	I0311 21:55:17.950751   76616 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:55:17.950758   76616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:55:18.036085   76616 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.327878551s)
	I0311 21:55:18.036144   76616 main.go:141] libmachine: Making call to close driver server
	I0311 21:55:18.036158   76616 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:55:18.036460   76616 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:55:18.036488   76616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:55:18.036499   76616 main.go:141] libmachine: Making call to close driver server
	I0311 21:55:18.036507   76616 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:55:18.036749   76616 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:55:18.036769   76616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:55:18.036780   76616 addons.go:470] Verifying addon metrics-server=true in "newest-cni-649653"
	I0311 21:55:18.343581   76616 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.317273167s)
	I0311 21:55:18.343645   76616 main.go:141] libmachine: Making call to close driver server
	I0311 21:55:18.343673   76616 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:55:18.344023   76616 main.go:141] libmachine: (newest-cni-649653) DBG | Closing plugin on server side
	I0311 21:55:18.344080   76616 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:55:18.344099   76616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:55:18.344114   76616 main.go:141] libmachine: Making call to close driver server
	I0311 21:55:18.344126   76616 main.go:141] libmachine: (newest-cni-649653) Calling .Close
	I0311 21:55:18.344342   76616 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:55:18.344355   76616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:55:18.345964   76616 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-649653 addons enable metrics-server
	
	I0311 21:55:18.347286   76616 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0311 21:55:18.348609   76616 addons.go:505] duration metric: took 2.32276825s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0311 21:55:18.348635   76616 start.go:245] waiting for cluster config update ...
	I0311 21:55:18.348646   76616 start.go:254] writing updated cluster config ...
	I0311 21:55:18.348881   76616 ssh_runner.go:195] Run: rm -f paused
	I0311 21:55:18.407900   76616 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0311 21:55:18.409689   76616 out.go:177] * Done! kubectl is now configured to use "newest-cni-649653" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.113106120Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194122113074256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1648fe0b-e889-4d45-b641-025d24edf424 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.113808516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8fdb011-30c7-4818-b791-abc2272b2736 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.113921522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8fdb011-30c7-4818-b791-abc2272b2736 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.114350717Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a57a390f8c15987fbe43e51210a9873f7724bd1e7ad40933410a29f2b3407cb,PodSandboxId:1273cadfcc0af0128e40db4cc1aec0cf4d6b4e647ea1dc825630b79b5fe59a67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710193233737274617,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d4992a-803a-4064-b372-6ba9729bd2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 40dbf215,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf1b35d40e2aad6c0963e020c8855ec3699d0921a2ae87765573c077446c0ff,PodSandboxId:57af43447b2b9ed98db403cc8c1acb7988045092f92f4e0c3def870fa0e2870f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193232074460457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qdcdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f100559-2b0a-4068-a3e7-475b5865a1d9,},Annotations:map[string]string{io.kubernetes.container.hash: 83254e48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f89c052cdef56de31184aa7da6faea46dbfe77a74e27b0aa35ab7c4b2ab05e9,PodSandboxId:c0b3aa5425dbf6a5e2d5d4a9babf54d2d68309733021f8c13a8055bb592981a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710193231622845396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4fwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b82ae7c-bffe-4fe4-b38c-3a789654df85,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7e889e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7781ea4a1ef60f3943016af578d7da74e77b05a668eda9c9ad9cbbf897197e48,PodSandboxId:50b60fb7a7ec426aa08b804221ab2f1b361a3d378261ccc76c6ab8046c6fff01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193231917074606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kxjhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09678270-80f4-4bde-8080-
3a3a41ecb356,},Annotations:map[string]string{io.kubernetes.container.hash: 617d4e5e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5fb8d4fec270301e1152ec332841bc8c4807a9d43b27868701ad36da0e6406,PodSandboxId:f390039e37629f5d8df6f629009fd268d878278943426cd7419cacf42bfe0191,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171019321192518399
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84f656d1b2a083ea3def41c157e42d64,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f93f14553d942a939cbde0380ab131f837857eb114ee9e8c490b7783f6829ab,PodSandboxId:3d46c70fa47641c5b2a82cdc33f2b75a350f71a455d6d36f97913e79f6cd08b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193211864499246,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275286ceaa417bed6e079bc90d20c67f,},Annotations:map[string]string{io.kubernetes.container.hash: e52a60d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1dc225baf7b7994d343081e18c14986400e0ec8dc0dcef6ed399b0b73cd0ef,PodSandboxId:0591ee40586bdc0b3889628144b7e44bfa75ec5f170c66327354ee4b599957f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193211911941771,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547fe021a44521b4b353ab08995030b9,},Annotations:map[string]string{io.kubernetes.container.hash: be84fa1e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bbf59add0cd484021beb1ca1cdecdb07dac9b07140a70d3de3db131512b597,PodSandboxId:83583f2ee62f5196d5006b51b95176333e9400ab7405bb1f18a001b46ab6b834,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193211784281349,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a60ab38660991dda736a8865454b52c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8fdb011-30c7-4818-b791-abc2272b2736 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.159405886Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f2c967e-a8fa-446b-9719-98eca8be1b8d name=/runtime.v1.RuntimeService/Version
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.159504518Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f2c967e-a8fa-446b-9719-98eca8be1b8d name=/runtime.v1.RuntimeService/Version
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.162819769Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f0884ad-5d88-49d9-844e-4ffbd735d088 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.163732471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194122163281837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f0884ad-5d88-49d9-844e-4ffbd735d088 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.164545157Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29c3837d-d69f-4619-a537-0c396eb2019a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.164595628Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29c3837d-d69f-4619-a537-0c396eb2019a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.164829761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a57a390f8c15987fbe43e51210a9873f7724bd1e7ad40933410a29f2b3407cb,PodSandboxId:1273cadfcc0af0128e40db4cc1aec0cf4d6b4e647ea1dc825630b79b5fe59a67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710193233737274617,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d4992a-803a-4064-b372-6ba9729bd2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 40dbf215,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf1b35d40e2aad6c0963e020c8855ec3699d0921a2ae87765573c077446c0ff,PodSandboxId:57af43447b2b9ed98db403cc8c1acb7988045092f92f4e0c3def870fa0e2870f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193232074460457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qdcdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f100559-2b0a-4068-a3e7-475b5865a1d9,},Annotations:map[string]string{io.kubernetes.container.hash: 83254e48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f89c052cdef56de31184aa7da6faea46dbfe77a74e27b0aa35ab7c4b2ab05e9,PodSandboxId:c0b3aa5425dbf6a5e2d5d4a9babf54d2d68309733021f8c13a8055bb592981a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710193231622845396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4fwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b82ae7c-bffe-4fe4-b38c-3a789654df85,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7e889e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7781ea4a1ef60f3943016af578d7da74e77b05a668eda9c9ad9cbbf897197e48,PodSandboxId:50b60fb7a7ec426aa08b804221ab2f1b361a3d378261ccc76c6ab8046c6fff01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193231917074606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kxjhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09678270-80f4-4bde-8080-
3a3a41ecb356,},Annotations:map[string]string{io.kubernetes.container.hash: 617d4e5e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5fb8d4fec270301e1152ec332841bc8c4807a9d43b27868701ad36da0e6406,PodSandboxId:f390039e37629f5d8df6f629009fd268d878278943426cd7419cacf42bfe0191,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171019321192518399
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84f656d1b2a083ea3def41c157e42d64,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f93f14553d942a939cbde0380ab131f837857eb114ee9e8c490b7783f6829ab,PodSandboxId:3d46c70fa47641c5b2a82cdc33f2b75a350f71a455d6d36f97913e79f6cd08b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193211864499246,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275286ceaa417bed6e079bc90d20c67f,},Annotations:map[string]string{io.kubernetes.container.hash: e52a60d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1dc225baf7b7994d343081e18c14986400e0ec8dc0dcef6ed399b0b73cd0ef,PodSandboxId:0591ee40586bdc0b3889628144b7e44bfa75ec5f170c66327354ee4b599957f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193211911941771,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547fe021a44521b4b353ab08995030b9,},Annotations:map[string]string{io.kubernetes.container.hash: be84fa1e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bbf59add0cd484021beb1ca1cdecdb07dac9b07140a70d3de3db131512b597,PodSandboxId:83583f2ee62f5196d5006b51b95176333e9400ab7405bb1f18a001b46ab6b834,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193211784281349,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a60ab38660991dda736a8865454b52c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29c3837d-d69f-4619-a537-0c396eb2019a name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.208156187Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26118c3b-fc67-4a21-b819-18e1eb653384 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.208253663Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26118c3b-fc67-4a21-b819-18e1eb653384 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.209730231Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb7cd8f9-6d44-4403-879f-be3d68464fec name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.210395604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194122210372380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb7cd8f9-6d44-4403-879f-be3d68464fec name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.210959851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6be476de-9930-487a-b34d-1c153718bf52 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.211108772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6be476de-9930-487a-b34d-1c153718bf52 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.211323362Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a57a390f8c15987fbe43e51210a9873f7724bd1e7ad40933410a29f2b3407cb,PodSandboxId:1273cadfcc0af0128e40db4cc1aec0cf4d6b4e647ea1dc825630b79b5fe59a67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710193233737274617,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d4992a-803a-4064-b372-6ba9729bd2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 40dbf215,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf1b35d40e2aad6c0963e020c8855ec3699d0921a2ae87765573c077446c0ff,PodSandboxId:57af43447b2b9ed98db403cc8c1acb7988045092f92f4e0c3def870fa0e2870f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193232074460457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qdcdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f100559-2b0a-4068-a3e7-475b5865a1d9,},Annotations:map[string]string{io.kubernetes.container.hash: 83254e48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f89c052cdef56de31184aa7da6faea46dbfe77a74e27b0aa35ab7c4b2ab05e9,PodSandboxId:c0b3aa5425dbf6a5e2d5d4a9babf54d2d68309733021f8c13a8055bb592981a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710193231622845396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4fwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b82ae7c-bffe-4fe4-b38c-3a789654df85,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7e889e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7781ea4a1ef60f3943016af578d7da74e77b05a668eda9c9ad9cbbf897197e48,PodSandboxId:50b60fb7a7ec426aa08b804221ab2f1b361a3d378261ccc76c6ab8046c6fff01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193231917074606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kxjhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09678270-80f4-4bde-8080-
3a3a41ecb356,},Annotations:map[string]string{io.kubernetes.container.hash: 617d4e5e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5fb8d4fec270301e1152ec332841bc8c4807a9d43b27868701ad36da0e6406,PodSandboxId:f390039e37629f5d8df6f629009fd268d878278943426cd7419cacf42bfe0191,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171019321192518399
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84f656d1b2a083ea3def41c157e42d64,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f93f14553d942a939cbde0380ab131f837857eb114ee9e8c490b7783f6829ab,PodSandboxId:3d46c70fa47641c5b2a82cdc33f2b75a350f71a455d6d36f97913e79f6cd08b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193211864499246,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275286ceaa417bed6e079bc90d20c67f,},Annotations:map[string]string{io.kubernetes.container.hash: e52a60d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1dc225baf7b7994d343081e18c14986400e0ec8dc0dcef6ed399b0b73cd0ef,PodSandboxId:0591ee40586bdc0b3889628144b7e44bfa75ec5f170c66327354ee4b599957f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193211911941771,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547fe021a44521b4b353ab08995030b9,},Annotations:map[string]string{io.kubernetes.container.hash: be84fa1e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bbf59add0cd484021beb1ca1cdecdb07dac9b07140a70d3de3db131512b597,PodSandboxId:83583f2ee62f5196d5006b51b95176333e9400ab7405bb1f18a001b46ab6b834,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193211784281349,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a60ab38660991dda736a8865454b52c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6be476de-9930-487a-b34d-1c153718bf52 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.248515267Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64de0fbc-64ed-4c4a-9c2e-974e9ebb9bd1 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.248604534Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64de0fbc-64ed-4c4a-9c2e-974e9ebb9bd1 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.250342868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3cfe4394-9551-4d69-ae3d-63f74269bf36 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.250709540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194122250690945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3cfe4394-9551-4d69-ae3d-63f74269bf36 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.251380376Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68d57013-725e-406f-8c95-8a24a1741c4f name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.251431010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68d57013-725e-406f-8c95-8a24a1741c4f name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:55:22 default-k8s-diff-port-766430 crio[694]: time="2024-03-11 21:55:22.251611330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8a57a390f8c15987fbe43e51210a9873f7724bd1e7ad40933410a29f2b3407cb,PodSandboxId:1273cadfcc0af0128e40db4cc1aec0cf4d6b4e647ea1dc825630b79b5fe59a67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710193233737274617,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1d4992a-803a-4064-b372-6ba9729bd2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 40dbf215,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf1b35d40e2aad6c0963e020c8855ec3699d0921a2ae87765573c077446c0ff,PodSandboxId:57af43447b2b9ed98db403cc8c1acb7988045092f92f4e0c3def870fa0e2870f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193232074460457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qdcdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f100559-2b0a-4068-a3e7-475b5865a1d9,},Annotations:map[string]string{io.kubernetes.container.hash: 83254e48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f89c052cdef56de31184aa7da6faea46dbfe77a74e27b0aa35ab7c4b2ab05e9,PodSandboxId:c0b3aa5425dbf6a5e2d5d4a9babf54d2d68309733021f8c13a8055bb592981a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710193231622845396,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4fwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b82ae7c-bffe-4fe4-b38c-3a789654df85,},Annotations:map[string]string{io.kubernetes.container.hash: 6d7e889e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7781ea4a1ef60f3943016af578d7da74e77b05a668eda9c9ad9cbbf897197e48,PodSandboxId:50b60fb7a7ec426aa08b804221ab2f1b361a3d378261ccc76c6ab8046c6fff01,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710193231917074606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kxjhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09678270-80f4-4bde-8080-
3a3a41ecb356,},Annotations:map[string]string{io.kubernetes.container.hash: 617d4e5e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5fb8d4fec270301e1152ec332841bc8c4807a9d43b27868701ad36da0e6406,PodSandboxId:f390039e37629f5d8df6f629009fd268d878278943426cd7419cacf42bfe0191,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171019321192518399
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84f656d1b2a083ea3def41c157e42d64,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f93f14553d942a939cbde0380ab131f837857eb114ee9e8c490b7783f6829ab,PodSandboxId:3d46c70fa47641c5b2a82cdc33f2b75a350f71a455d6d36f97913e79f6cd08b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710193211864499246,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275286ceaa417bed6e079bc90d20c67f,},Annotations:map[string]string{io.kubernetes.container.hash: e52a60d5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1dc225baf7b7994d343081e18c14986400e0ec8dc0dcef6ed399b0b73cd0ef,PodSandboxId:0591ee40586bdc0b3889628144b7e44bfa75ec5f170c66327354ee4b599957f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710193211911941771,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 547fe021a44521b4b353ab08995030b9,},Annotations:map[string]string{io.kubernetes.container.hash: be84fa1e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63bbf59add0cd484021beb1ca1cdecdb07dac9b07140a70d3de3db131512b597,PodSandboxId:83583f2ee62f5196d5006b51b95176333e9400ab7405bb1f18a001b46ab6b834,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710193211784281349,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-766430,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a60ab38660991dda736a8865454b52c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68d57013-725e-406f-8c95-8a24a1741c4f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8a57a390f8c15       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   1273cadfcc0af       storage-provisioner
	abf1b35d40e2a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   57af43447b2b9       coredns-5dd5756b68-qdcdw
	7781ea4a1ef60       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   50b60fb7a7ec4       coredns-5dd5756b68-kxjhf
	5f89c052cdef5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   c0b3aa5425dbf       kube-proxy-t4fwc
	dd5fb8d4fec27       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   f390039e37629       kube-scheduler-default-k8s-diff-port-766430
	3c1dc225baf7b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   0591ee40586bd       kube-apiserver-default-k8s-diff-port-766430
	5f93f14553d94       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   3d46c70fa4764       etcd-default-k8s-diff-port-766430
	63bbf59add0cd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   83583f2ee62f5       kube-controller-manager-default-k8s-diff-port-766430
	
	
	==> coredns [7781ea4a1ef60f3943016af578d7da74e77b05a668eda9c9ad9cbbf897197e48] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [abf1b35d40e2aad6c0963e020c8855ec3699d0921a2ae87765573c077446c0ff] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-766430
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-766430
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=default-k8s-diff-port-766430
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T21_40_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 21:40:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-766430
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:55:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 21:50:52 +0000   Mon, 11 Mar 2024 21:40:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 21:50:52 +0000   Mon, 11 Mar 2024 21:40:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 21:50:52 +0000   Mon, 11 Mar 2024 21:40:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 21:50:52 +0000   Mon, 11 Mar 2024 21:40:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.11
	  Hostname:    default-k8s-diff-port-766430
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 68d6742a9a424b7182e2499f72626db5
	  System UUID:                68d6742a-9a42-4b71-82e2-499f72626db5
	  Boot ID:                    3effb575-f6a2-493a-bef9-c4a2015cfb66
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-kxjhf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5dd5756b68-qdcdw                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-default-k8s-diff-port-766430                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-766430             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-766430    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-t4fwc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-default-k8s-diff-port-766430             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-9slpq                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-766430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-766430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-766430 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node default-k8s-diff-port-766430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node default-k8s-diff-port-766430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node default-k8s-diff-port-766430 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node default-k8s-diff-port-766430 event: Registered Node default-k8s-diff-port-766430 in Controller
	
	
	==> dmesg <==
	[  +0.063050] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050568] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.028029] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.527508] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.730561] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar11 21:35] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.064178] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070096] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.189536] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.159773] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.282150] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +5.558152] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[  +0.069754] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.054410] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +5.674994] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.077885] kauditd_printk_skb: 74 callbacks suppressed
	[Mar11 21:40] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.424411] systemd-fstab-generator[3413]: Ignoring "noauto" option for root device
	[  +7.763340] systemd-fstab-generator[3731]: Ignoring "noauto" option for root device
	[  +0.078979] kauditd_printk_skb: 55 callbacks suppressed
	[ +12.402976] systemd-fstab-generator[3918]: Ignoring "noauto" option for root device
	[  +0.107452] kauditd_printk_skb: 12 callbacks suppressed
	[Mar11 21:41] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [5f93f14553d942a939cbde0380ab131f837857eb114ee9e8c490b7783f6829ab] <==
	{"level":"info","ts":"2024-03-11T21:40:13.070213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 received MsgPreVoteResp from 2895711bae57da21 at term 1"}
	{"level":"info","ts":"2024-03-11T21:40:13.070247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 became candidate at term 2"}
	{"level":"info","ts":"2024-03-11T21:40:13.070279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 received MsgVoteResp from 2895711bae57da21 at term 2"}
	{"level":"info","ts":"2024-03-11T21:40:13.070306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2895711bae57da21 became leader at term 2"}
	{"level":"info","ts":"2024-03-11T21:40:13.07033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2895711bae57da21 elected leader 2895711bae57da21 at term 2"}
	{"level":"info","ts":"2024-03-11T21:40:13.075167Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:40:13.079193Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fb6e72b45dde42f9","local-member-id":"2895711bae57da21","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:40:13.07932Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:40:13.079357Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T21:40:13.079393Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"2895711bae57da21","local-member-attributes":"{Name:default-k8s-diff-port-766430 ClientURLs:[https://192.168.61.11:2379]}","request-path":"/0/members/2895711bae57da21/attributes","cluster-id":"fb6e72b45dde42f9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T21:40:13.079537Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:40:13.08079Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T21:40:13.083165Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T21:40:13.084134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.11:2379"}
	{"level":"info","ts":"2024-03-11T21:40:13.086067Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T21:40:13.086118Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T21:50:13.130218Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":707}
	{"level":"info","ts":"2024-03-11T21:50:13.132804Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":707,"took":"2.058944ms","hash":3510740585}
	{"level":"info","ts":"2024-03-11T21:50:13.132927Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3510740585,"revision":707,"compact-revision":-1}
	{"level":"info","ts":"2024-03-11T21:55:09.111265Z","caller":"traceutil/trace.go:171","msg":"trace[1151777553] transaction","detail":"{read_only:false; response_revision:1191; number_of_response:1; }","duration":"145.472164ms","start":"2024-03-11T21:55:08.965746Z","end":"2024-03-11T21:55:09.111218Z","steps":["trace[1151777553] 'process raft request'  (duration: 145.384484ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T21:55:09.340645Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.053323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-11T21:55:09.340771Z","caller":"traceutil/trace.go:171","msg":"trace[602857649] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1191; }","duration":"103.238964ms","start":"2024-03-11T21:55:09.237507Z","end":"2024-03-11T21:55:09.340746Z","steps":["trace[602857649] 'range keys from in-memory index tree'  (duration: 102.967575ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T21:55:13.138368Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":950}
	{"level":"info","ts":"2024-03-11T21:55:13.139844Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":950,"took":"1.212611ms","hash":1323042631}
	{"level":"info","ts":"2024-03-11T21:55:13.13992Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1323042631,"revision":950,"compact-revision":707}
	
	
	==> kernel <==
	 21:55:22 up 20 min,  0 users,  load average: 0.27, 0.21, 0.18
	Linux default-k8s-diff-port-766430 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3c1dc225baf7b7994d343081e18c14986400e0ec8dc0dcef6ed399b0b73cd0ef] <==
	E0311 21:51:16.148119       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:51:16.148157       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 21:52:15.036855       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 21:53:15.036685       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0311 21:53:16.147746       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:53:16.147865       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:53:16.147877       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0311 21:53:16.149089       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:53:16.149220       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:53:16.149264       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0311 21:54:15.036266       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 21:55:15.036335       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0311 21:55:15.152696       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:55:15.152856       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:55:15.154178       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0311 21:55:16.153637       1 handler_proxy.go:93] no RequestInfo found in the context
	W0311 21:55:16.153695       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 21:55:16.153898       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 21:55:16.153936       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0311 21:55:16.153782       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0311 21:55:16.155189       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [63bbf59add0cd484021beb1ca1cdecdb07dac9b07140a70d3de3db131512b597] <==
	I0311 21:49:30.728926       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:50:00.128712       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:50:00.738215       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:50:30.135299       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:50:30.746453       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:51:00.141273       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:51:00.756748       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:51:30.147509       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:51:30.765532       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0311 21:51:33.827718       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="651.891µs"
	I0311 21:51:44.831235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="278.033µs"
	E0311 21:52:00.157069       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:52:00.775795       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:52:30.172599       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:52:30.785103       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:53:00.180243       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:53:00.798561       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:53:30.190754       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:53:30.809191       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:54:00.199164       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:54:00.819393       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:54:30.207507       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:54:30.830970       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0311 21:55:00.213098       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0311 21:55:00.841939       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5f89c052cdef56de31184aa7da6faea46dbfe77a74e27b0aa35ab7c4b2ab05e9] <==
	I0311 21:40:32.661526       1 server_others.go:69] "Using iptables proxy"
	I0311 21:40:32.690174       1 node.go:141] Successfully retrieved node IP: 192.168.61.11
	I0311 21:40:32.839195       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 21:40:32.839248       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 21:40:32.841774       1 server_others.go:152] "Using iptables Proxier"
	I0311 21:40:32.849519       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 21:40:32.850086       1 server.go:846] "Version info" version="v1.28.4"
	I0311 21:40:32.850166       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 21:40:32.852381       1 config.go:188] "Starting service config controller"
	I0311 21:40:32.852895       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 21:40:32.852925       1 config.go:97] "Starting endpoint slice config controller"
	I0311 21:40:32.852930       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 21:40:32.854659       1 config.go:315] "Starting node config controller"
	I0311 21:40:32.854701       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 21:40:32.953120       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 21:40:32.953185       1 shared_informer.go:318] Caches are synced for service config
	I0311 21:40:32.969371       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [dd5fb8d4fec270301e1152ec332841bc8c4807a9d43b27868701ad36da0e6406] <==
	E0311 21:40:15.240108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 21:40:15.240546       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 21:40:15.240714       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 21:40:15.241103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 21:40:15.241250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 21:40:15.241586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 21:40:15.226115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 21:40:15.243891       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 21:40:15.244585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 21:40:15.244322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 21:40:16.147598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 21:40:16.147693       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 21:40:16.195326       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 21:40:16.195378       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 21:40:16.273959       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 21:40:16.274154       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 21:40:16.346703       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 21:40:16.346811       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 21:40:16.354906       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 21:40:16.355090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 21:40:16.400135       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 21:40:16.400256       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0311 21:40:16.485580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 21:40:16.485672       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0311 21:40:18.092143       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 21:53:12 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:53:12.811191    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:53:18 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:53:18.873558    3738 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:53:18 default-k8s-diff-port-766430 kubelet[3738]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:53:18 default-k8s-diff-port-766430 kubelet[3738]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:53:18 default-k8s-diff-port-766430 kubelet[3738]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:53:18 default-k8s-diff-port-766430 kubelet[3738]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:53:27 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:53:27.811348    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:53:42 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:53:42.810831    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:53:53 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:53:53.810499    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:54:08 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:54:08.811162    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:54:18 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:54:18.874883    3738 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:54:18 default-k8s-diff-port-766430 kubelet[3738]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:54:18 default-k8s-diff-port-766430 kubelet[3738]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:54:18 default-k8s-diff-port-766430 kubelet[3738]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:54:18 default-k8s-diff-port-766430 kubelet[3738]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 11 21:54:22 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:54:22.811135    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:54:34 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:54:34.811124    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:54:45 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:54:45.810043    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:55:00 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:55:00.810445    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:55:13 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:55:13.809835    3738 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9slpq" podUID="ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18"
	Mar 11 21:55:18 default-k8s-diff-port-766430 kubelet[3738]: E0311 21:55:18.875758    3738 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 11 21:55:18 default-k8s-diff-port-766430 kubelet[3738]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 11 21:55:18 default-k8s-diff-port-766430 kubelet[3738]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 11 21:55:18 default-k8s-diff-port-766430 kubelet[3738]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 11 21:55:18 default-k8s-diff-port-766430 kubelet[3738]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [8a57a390f8c15987fbe43e51210a9873f7724bd1e7ad40933410a29f2b3407cb] <==
	I0311 21:40:33.886051       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 21:40:33.902810       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 21:40:33.902884       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 21:40:33.917132       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 21:40:33.917409       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-766430_dd940636-24d5-4105-81b4-842f67ac10d7!
	I0311 21:40:33.919368       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dbc8c8b6-2640-4db0-907a-adf39a31a724", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-766430_dd940636-24d5-4105-81b4-842f67ac10d7 became leader
	I0311 21:40:34.018249       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-766430_dd940636-24d5-4105-81b4-842f67ac10d7!
	

                                                
                                                
-- /stdout --
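The sectioned dump ending at -- /stdout -- above (container status, describe nodes, dmesg, and per-component logs) matches the format of minikube's log bundle. As a sketch for re-collecting the same bundle by hand while the default-k8s-diff-port-766430 profile still exists (the output file name is just an example, not taken from this run):

	out/minikube-linux-amd64 -p default-k8s-diff-port-766430 logs --file=default-k8s-diff-port-766430.log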
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-766430 -n default-k8s-diff-port-766430
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-766430 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9slpq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-766430 describe pod metrics-server-57f55c9bc5-9slpq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-766430 describe pod metrics-server-57f55c9bc5-9slpq: exit status 1 (57.819374ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9slpq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-766430 describe pod metrics-server-57f55c9bc5-9slpq: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (344.22s)
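The addon check above never sees a healthy metrics-server pod: per the kubelet log in the dump, metrics-server-57f55c9bc5-9slpq is stuck in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4, an image that can never be pulled, and by the time the post-mortem kubectl describe runs the pod has apparently already been removed, hence the NotFound error. A minimal sketch for repeating the post-mortem by hand while the profile is still running; the k8s-app=metrics-server label selector is the usual one for the metrics-server addon and is an assumption here, not something taken from this log:

	kubectl --context default-k8s-diff-port-766430 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context default-k8s-diff-port-766430 -n kube-system describe pods -l k8s-app=metrics-server

Selecting by label rather than by pod name avoids the NotFound race when the pod has already been deleted or replaced.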

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (88.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(the WARNING above was logged 37 consecutive times during the 9m0s wait)
E0311 21:52:37.144881   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/calico-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
E0311 21:52:38.935263   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(the WARNING above was logged 16 consecutive times)
E0311 21:52:55.427498   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/custom-flannel-427678/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.52:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.52:8443: connect: connection refused
(the WARNING above was logged 30 consecutive times)
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239315 -n old-k8s-version-239315
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239315 -n old-k8s-version-239315: exit status 2 (246.10013ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-239315" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-239315 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-239315 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.425µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-239315 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
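Once the apiserver for this profile is reachable again, the image assertion that produced the empty "Addon deployment info" above can be re-run by hand (a sketch; the deployment name and namespace are taken from the describe command the test attempted):

	kubectl --context old-k8s-version-239315 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'

The test expects the printed image to contain registry.k8s.io/echoserver:1.4, i.e. the override passed via --images=MetricsScraper=registry.k8s.io/echoserver:1.4 in the Audit log further down.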
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315: exit status 2 (239.612929ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-239315 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-239315 logs -n 25: (1.531260848s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-427678 sudo cat                              | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo                                  | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo find                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-427678 sudo crio                             | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-427678                                       | bridge-427678                | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-124446 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:25 UTC |
	|         | disable-driver-mounts-124446                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:25 UTC | 11 Mar 24 21:26 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-766430  | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-324578             | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-743937            | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC | 11 Mar 24 21:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239315        | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-766430       | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-324578                  | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-766430 | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:40 UTC |
	|         | default-k8s-diff-port-766430                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-324578                                   | no-preload-324578            | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-743937                 | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-743937                                  | embed-certs-743937           | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:29 UTC | 11 Mar 24 21:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239315             | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC | 11 Mar 24 21:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239315                              | old-k8s-version-239315       | jenkins | v1.32.0 | 11 Mar 24 21:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 21:30:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 21:30:01.044166   70908 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:30:01.044254   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044259   70908 out.go:304] Setting ErrFile to fd 2...
	I0311 21:30:01.044263   70908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:30:01.044451   70908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:30:01.044970   70908 out.go:298] Setting JSON to false
	I0311 21:30:01.045838   70908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7950,"bootTime":1710184651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:30:01.045894   70908 start.go:139] virtualization: kvm guest
	I0311 21:30:01.048311   70908 out.go:177] * [old-k8s-version-239315] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:30:01.050003   70908 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:30:01.050011   70908 notify.go:220] Checking for updates...
	I0311 21:30:01.051498   70908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:30:01.052999   70908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:30:01.054439   70908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:30:01.055768   70908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:30:01.057137   70908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:30:01.058760   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:30:01.059167   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.059205   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.073734   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0311 21:30:01.074087   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.074586   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.074618   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.074966   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.075173   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.077005   70908 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 21:30:01.078583   70908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:30:01.078879   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:30:01.078914   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:30:01.093226   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0311 21:30:01.093614   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:30:01.094174   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:30:01.094243   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:30:01.094616   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:30:01.094805   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:30:01.128302   70908 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 21:30:01.129965   70908 start.go:297] selected driver: kvm2
	I0311 21:30:01.129991   70908 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.130113   70908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:30:01.131050   70908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.131115   70908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 21:30:01.145452   70908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 21:30:01.145782   70908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:30:01.145811   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:30:01.145819   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:30:01.145863   70908 start.go:340] cluster config:
	{Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:30:01.145954   70908 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 21:30:01.147725   70908 out.go:177] * Starting "old-k8s-version-239315" primary control-plane node in "old-k8s-version-239315" cluster
	I0311 21:30:01.148916   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:30:01.148943   70908 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0311 21:30:01.148955   70908 cache.go:56] Caching tarball of preloaded images
	I0311 21:30:01.149022   70908 preload.go:173] Found /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0311 21:30:01.149032   70908 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0311 21:30:01.149114   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:30:01.149263   70908 start.go:360] acquireMachinesLock for old-k8s-version-239315: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:30:05.352968   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:08.425086   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:14.504922   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:17.577080   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:23.656996   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:26.729009   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:32.809042   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:35.881008   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:41.960992   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:45.033096   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:51.112925   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:30:54.184989   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:00.265058   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:03.337012   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:09.416960   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:12.489005   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:18.569021   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:21.640990   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:27.721019   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:30.793040   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:36.872985   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:39.945005   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:46.025035   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:49.096988   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:55.176985   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:31:58.249009   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:04.328981   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:07.401006   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:13.480986   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:16.552965   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:22.632997   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:25.705064   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:31.784993   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:34.857027   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:40.937002   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:44.008989   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:50.088959   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:53.161092   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:32:59.241045   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:02.313084   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:08.393056   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:11.465079   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:17.545057   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:20.617082   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:26.697000   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:29.768926   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:35.849024   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:38.921096   70417 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.11:22: connect: no route to host
	I0311 21:33:41.925305   70458 start.go:364] duration metric: took 4m36.419231792s to acquireMachinesLock for "no-preload-324578"
	I0311 21:33:41.925360   70458 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:33:41.925368   70458 fix.go:54] fixHost starting: 
	I0311 21:33:41.925768   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:33:41.925798   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:33:41.940654   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0311 21:33:41.941130   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:33:41.941619   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:33:41.941646   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:33:41.942045   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:33:41.942209   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:33:41.942370   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:33:41.944009   70458 fix.go:112] recreateIfNeeded on no-preload-324578: state=Stopped err=<nil>
	I0311 21:33:41.944030   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	W0311 21:33:41.944231   70458 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:33:41.946020   70458 out.go:177] * Restarting existing kvm2 VM for "no-preload-324578" ...
	I0311 21:33:41.922711   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:33:41.922754   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:33:41.923131   70417 buildroot.go:166] provisioning hostname "default-k8s-diff-port-766430"
	I0311 21:33:41.923158   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:33:41.923430   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:33:41.925178   70417 machine.go:97] duration metric: took 4m37.414792129s to provisionDockerMachine
	I0311 21:33:41.925213   70417 fix.go:56] duration metric: took 4m37.435982654s for fixHost
	I0311 21:33:41.925219   70417 start.go:83] releasing machines lock for "default-k8s-diff-port-766430", held for 4m37.436000925s
	W0311 21:33:41.925242   70417 start.go:713] error starting host: provision: host is not running
	W0311 21:33:41.925330   70417 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0311 21:33:41.925343   70417 start.go:728] Will try again in 5 seconds ...
	I0311 21:33:41.947495   70458 main.go:141] libmachine: (no-preload-324578) Calling .Start
	I0311 21:33:41.947676   70458 main.go:141] libmachine: (no-preload-324578) Ensuring networks are active...
	I0311 21:33:41.948386   70458 main.go:141] libmachine: (no-preload-324578) Ensuring network default is active
	I0311 21:33:41.948724   70458 main.go:141] libmachine: (no-preload-324578) Ensuring network mk-no-preload-324578 is active
	I0311 21:33:41.949117   70458 main.go:141] libmachine: (no-preload-324578) Getting domain xml...
	I0311 21:33:41.949876   70458 main.go:141] libmachine: (no-preload-324578) Creating domain...
	I0311 21:33:43.129733   70458 main.go:141] libmachine: (no-preload-324578) Waiting to get IP...
	I0311 21:33:43.130601   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.131006   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.131053   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.130975   71444 retry.go:31] will retry after 209.203314ms: waiting for machine to come up
	I0311 21:33:43.341724   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.342324   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.342361   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.342279   71444 retry.go:31] will retry after 375.396917ms: waiting for machine to come up
	I0311 21:33:43.718906   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:43.719329   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:43.719351   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:43.719288   71444 retry.go:31] will retry after 428.365393ms: waiting for machine to come up
	I0311 21:33:44.148895   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:44.149334   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:44.149358   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:44.149284   71444 retry.go:31] will retry after 561.478535ms: waiting for machine to come up
	I0311 21:33:44.712065   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:44.712548   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:44.712576   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:44.712465   71444 retry.go:31] will retry after 700.993236ms: waiting for machine to come up
	I0311 21:33:46.926379   70417 start.go:360] acquireMachinesLock for default-k8s-diff-port-766430: {Name:mk92e5668ffdba05ab9d8973476f5480b3d3956c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 21:33:45.415695   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:45.416242   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:45.416276   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:45.416215   71444 retry.go:31] will retry after 809.474202ms: waiting for machine to come up
	I0311 21:33:46.227098   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:46.227573   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:46.227608   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:46.227520   71444 retry.go:31] will retry after 1.075187328s: waiting for machine to come up
	I0311 21:33:47.303981   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:47.304454   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:47.304483   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:47.304397   71444 retry.go:31] will retry after 1.145290319s: waiting for machine to come up
	I0311 21:33:48.451871   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:48.452316   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:48.452350   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:48.452267   71444 retry.go:31] will retry after 1.172261063s: waiting for machine to come up
	I0311 21:33:49.626502   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:49.627067   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:49.627089   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:49.627023   71444 retry.go:31] will retry after 2.201479026s: waiting for machine to come up
	I0311 21:33:51.831519   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:51.831972   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:51.832008   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:51.831905   71444 retry.go:31] will retry after 2.888101699s: waiting for machine to come up
	I0311 21:33:54.721322   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:54.721753   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:54.721773   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:54.721722   71444 retry.go:31] will retry after 3.512655296s: waiting for machine to come up
	I0311 21:33:58.235767   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:33:58.236180   70458 main.go:141] libmachine: (no-preload-324578) DBG | unable to find current IP address of domain no-preload-324578 in network mk-no-preload-324578
	I0311 21:33:58.236219   70458 main.go:141] libmachine: (no-preload-324578) DBG | I0311 21:33:58.236141   71444 retry.go:31] will retry after 3.975760652s: waiting for machine to come up
	I0311 21:34:03.525918   70604 start.go:364] duration metric: took 4m44.449252209s to acquireMachinesLock for "embed-certs-743937"
	I0311 21:34:03.525995   70604 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:03.526008   70604 fix.go:54] fixHost starting: 
	I0311 21:34:03.526428   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:03.526470   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:03.542427   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I0311 21:34:03.542857   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:03.543292   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:34:03.543317   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:03.543616   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:03.543806   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:03.543991   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:34:03.545366   70604 fix.go:112] recreateIfNeeded on embed-certs-743937: state=Stopped err=<nil>
	I0311 21:34:03.545391   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	W0311 21:34:03.545540   70604 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:03.548158   70604 out.go:177] * Restarting existing kvm2 VM for "embed-certs-743937" ...
	I0311 21:34:03.549803   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Start
	I0311 21:34:03.549966   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring networks are active...
	I0311 21:34:03.550712   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring network default is active
	I0311 21:34:03.551124   70604 main.go:141] libmachine: (embed-certs-743937) Ensuring network mk-embed-certs-743937 is active
	I0311 21:34:03.551528   70604 main.go:141] libmachine: (embed-certs-743937) Getting domain xml...
	I0311 21:34:03.552226   70604 main.go:141] libmachine: (embed-certs-743937) Creating domain...
	I0311 21:34:02.213709   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.214152   70458 main.go:141] libmachine: (no-preload-324578) Found IP for machine: 192.168.39.36
	I0311 21:34:02.214181   70458 main.go:141] libmachine: (no-preload-324578) Reserving static IP address...
	I0311 21:34:02.214196   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has current primary IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.214631   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "no-preload-324578", mac: "52:54:00:00:fc:98", ip: "192.168.39.36"} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.214655   70458 main.go:141] libmachine: (no-preload-324578) DBG | skip adding static IP to network mk-no-preload-324578 - found existing host DHCP lease matching {name: "no-preload-324578", mac: "52:54:00:00:fc:98", ip: "192.168.39.36"}
	I0311 21:34:02.214666   70458 main.go:141] libmachine: (no-preload-324578) Reserved static IP address: 192.168.39.36
	I0311 21:34:02.214680   70458 main.go:141] libmachine: (no-preload-324578) Waiting for SSH to be available...
	I0311 21:34:02.214704   70458 main.go:141] libmachine: (no-preload-324578) DBG | Getting to WaitForSSH function...
	I0311 21:34:02.216798   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.217068   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.217111   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.217285   70458 main.go:141] libmachine: (no-preload-324578) DBG | Using SSH client type: external
	I0311 21:34:02.217316   70458 main.go:141] libmachine: (no-preload-324578) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa (-rw-------)
	I0311 21:34:02.217356   70458 main.go:141] libmachine: (no-preload-324578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:02.217374   70458 main.go:141] libmachine: (no-preload-324578) DBG | About to run SSH command:
	I0311 21:34:02.217389   70458 main.go:141] libmachine: (no-preload-324578) DBG | exit 0
	I0311 21:34:02.340837   70458 main.go:141] libmachine: (no-preload-324578) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:02.341154   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetConfigRaw
	I0311 21:34:02.341752   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:02.344368   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.344756   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.344791   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.344942   70458 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/config.json ...
	I0311 21:34:02.345142   70458 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:02.345159   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:02.345353   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.347647   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.348001   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.348029   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.348118   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.348284   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.348432   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.348548   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.348704   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.348913   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.348925   70458 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:02.457273   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:02.457298   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.457523   70458 buildroot.go:166] provisioning hostname "no-preload-324578"
	I0311 21:34:02.457554   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.457757   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.460347   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.460658   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.460688   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.460913   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.461126   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.461286   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.461415   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.461574   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.461758   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.461775   70458 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-324578 && echo "no-preload-324578" | sudo tee /etc/hostname
	I0311 21:34:02.583388   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-324578
	
	I0311 21:34:02.583414   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.586043   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.586399   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.586431   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.586592   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.586799   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.586957   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.587084   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.587271   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.587433   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.587449   70458 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-324578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-324578/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-324578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:02.702365   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:02.702399   70458 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:02.702420   70458 buildroot.go:174] setting up certificates
	I0311 21:34:02.702431   70458 provision.go:84] configureAuth start
	I0311 21:34:02.702439   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetMachineName
	I0311 21:34:02.702725   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:02.705459   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.705882   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.705902   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.706048   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.708166   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.708476   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.708502   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.708618   70458 provision.go:143] copyHostCerts
	I0311 21:34:02.708675   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:02.708684   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:02.708764   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:02.708875   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:02.708885   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:02.708911   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:02.708977   70458 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:02.708984   70458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:02.709005   70458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:02.709063   70458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.no-preload-324578 san=[127.0.0.1 192.168.39.36 localhost minikube no-preload-324578]
	I0311 21:34:02.823423   70458 provision.go:177] copyRemoteCerts
	I0311 21:34:02.823484   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:02.823508   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.826221   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.826538   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.826584   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.826743   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.826974   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.827158   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.827344   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:02.912138   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:02.938138   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0311 21:34:02.967391   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:34:02.992208   70458 provision.go:87] duration metric: took 289.765831ms to configureAuth
	I0311 21:34:02.992232   70458 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:02.992376   70458 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:34:02.992440   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:02.994808   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.995124   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:02.995154   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:02.995315   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:02.995490   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.995640   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:02.995818   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:02.995997   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:02.996187   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:02.996202   70458 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:03.283611   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:03.283643   70458 machine.go:97] duration metric: took 938.487892ms to provisionDockerMachine
	I0311 21:34:03.283655   70458 start.go:293] postStartSetup for "no-preload-324578" (driver="kvm2")
	I0311 21:34:03.283667   70458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:03.283681   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.284008   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:03.284043   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.286802   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.287220   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.287262   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.287379   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.287546   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.287731   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.287930   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.372563   70458 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:03.377151   70458 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:03.377171   70458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:03.377225   70458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:03.377291   70458 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:03.377377   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:03.387792   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:03.412721   70458 start.go:296] duration metric: took 129.055927ms for postStartSetup
	I0311 21:34:03.412770   70458 fix.go:56] duration metric: took 21.487401487s for fixHost
	I0311 21:34:03.412790   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.415209   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.415507   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.415533   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.415668   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.415866   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.416035   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.416179   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.416353   70458 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:03.416502   70458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0311 21:34:03.416513   70458 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:03.525759   70458 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192843.475283818
	
	I0311 21:34:03.525781   70458 fix.go:216] guest clock: 1710192843.475283818
	I0311 21:34:03.525790   70458 fix.go:229] Guest: 2024-03-11 21:34:03.475283818 +0000 UTC Remote: 2024-03-11 21:34:03.412775346 +0000 UTC m=+298.052241307 (delta=62.508472ms)
	I0311 21:34:03.525815   70458 fix.go:200] guest clock delta is within tolerance: 62.508472ms
	I0311 21:34:03.525833   70458 start.go:83] releasing machines lock for "no-preload-324578", held for 21.600490138s
	I0311 21:34:03.525866   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.526157   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:03.528771   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.529117   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.529143   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.529272   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529721   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529897   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:03.529978   70458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:03.530022   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.530124   70458 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:03.530151   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:03.532450   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.532624   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.532813   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.532843   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.533001   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.533010   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:03.533034   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:03.533171   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.533197   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:03.533350   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.533353   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:03.533504   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:03.533506   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.533639   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:03.614855   70458 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:03.638835   70458 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:03.787832   70458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:03.794627   70458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:03.794677   70458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:03.811771   70458 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:03.811790   70458 start.go:494] detecting cgroup driver to use...
	I0311 21:34:03.811845   70458 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:03.829561   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:03.844536   70458 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:03.844582   70458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:03.859811   70458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:03.875041   70458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:03.991456   70458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:04.174783   70458 docker.go:233] disabling docker service ...
	I0311 21:34:04.174848   70458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:04.192524   70458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:04.206906   70458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:04.340047   70458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:04.455686   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:04.472512   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:04.495487   70458 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:34:04.495550   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.506921   70458 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:04.506997   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.519408   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.531418   70458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:04.543684   70458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:04.555846   70458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:04.567610   70458 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:04.567658   70458 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:04.583015   70458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:04.594515   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:04.715185   70458 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:04.872750   70458 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:04.872848   70458 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:04.878207   70458 start.go:562] Will wait 60s for crictl version
	I0311 21:34:04.878250   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:04.882436   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:04.921007   70458 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:04.921079   70458 ssh_runner.go:195] Run: crio --version
	I0311 21:34:04.959326   70458 ssh_runner.go:195] Run: crio --version
	I0311 21:34:04.997595   70458 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0311 21:34:04.999092   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetIP
	I0311 21:34:05.002092   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:05.002526   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:05.002566   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:05.002790   70458 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:05.007758   70458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:05.023330   70458 kubeadm.go:877] updating cluster {Name:no-preload-324578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:05.023430   70458 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0311 21:34:05.023461   70458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:05.063043   70458 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0311 21:34:05.063071   70458 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:34:05.063161   70458 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.063170   70458 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.063183   70458 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.063190   70458 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.063233   70458 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0311 21:34:05.063171   70458 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.063272   70458 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.063307   70458 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.065013   70458 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.065019   70458 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.065020   70458 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.065045   70458 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0311 21:34:05.065017   70458 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.065018   70458 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.065064   70458 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.065365   70458 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.209182   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.211431   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.220663   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.230965   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.237859   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0311 21:34:05.260820   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.288596   70458 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0311 21:34:05.288651   70458 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.288697   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.324896   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.342987   70458 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0311 21:34:05.343030   70458 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.343080   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.371663   70458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.377262   70458 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0311 21:34:05.377306   70458 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.377349   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:04.792889   70604 main.go:141] libmachine: (embed-certs-743937) Waiting to get IP...
	I0311 21:34:04.793678   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:04.794097   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:04.794152   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:04.794064   71579 retry.go:31] will retry after 281.522937ms: waiting for machine to come up
	I0311 21:34:05.077518   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.077856   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.077889   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.077814   71579 retry.go:31] will retry after 303.836522ms: waiting for machine to come up
	I0311 21:34:05.383244   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.383796   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.383839   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.383758   71579 retry.go:31] will retry after 333.172379ms: waiting for machine to come up
	I0311 21:34:05.718117   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:05.718603   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:05.718630   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:05.718562   71579 retry.go:31] will retry after 469.046827ms: waiting for machine to come up
	I0311 21:34:06.189304   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:06.189748   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:06.189777   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:06.189705   71579 retry.go:31] will retry after 636.781259ms: waiting for machine to come up
	I0311 21:34:06.828672   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:06.829136   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:06.829174   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:06.829078   71579 retry.go:31] will retry after 758.609427ms: waiting for machine to come up
	I0311 21:34:07.589134   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:07.589490   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:07.589513   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:07.589466   71579 retry.go:31] will retry after 990.575872ms: waiting for machine to come up
	I0311 21:34:08.581971   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:08.582312   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:08.582344   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:08.582290   71579 retry.go:31] will retry after 1.142377902s: waiting for machine to come up
	I0311 21:34:05.421288   70458 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0311 21:34:05.421340   70458 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.421390   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473450   70458 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0311 21:34:05.473497   70458 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.473527   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 21:34:05.473545   70458 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0311 21:34:05.473584   70458 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.473603   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0311 21:34:05.473639   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473663   70458 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0311 21:34:05.473701   70458 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.473707   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0311 21:34:05.473730   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473548   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:34:05.473766   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 21:34:05.569510   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.569615   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.578915   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0311 21:34:05.578979   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:05.579007   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:05.579029   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:05.579077   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:05.579117   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 21:34:05.579158   70458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 21:34:05.579209   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0311 21:34:05.579272   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:05.584413   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0311 21:34:05.584425   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.584458   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0311 21:34:05.679191   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0311 21:34:05.679259   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0311 21:34:05.679288   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:05.679337   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:05.679368   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0311 21:34:05.679369   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0311 21:34:05.679414   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:05.679428   70458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0311 21:34:05.679485   70458 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:07.621341   70458 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.942028932s)
	I0311 21:34:07.621382   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0311 21:34:07.621385   70458 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.941873405s)
	I0311 21:34:07.621413   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0311 21:34:07.621424   70458 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (1.941989707s)
	I0311 21:34:07.621452   70458 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0311 21:34:07.621544   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.037072472s)
	I0311 21:34:07.621558   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0311 21:34:07.621580   70458 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:07.621627   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0311 21:34:09.726761   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:09.727207   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:09.727241   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:09.727153   71579 retry.go:31] will retry after 1.17092616s: waiting for machine to come up
	I0311 21:34:10.899311   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:10.899656   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:10.899675   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:10.899631   71579 retry.go:31] will retry after 1.870900402s: waiting for machine to come up
	I0311 21:34:12.771931   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:12.772421   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:12.772457   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:12.772375   71579 retry.go:31] will retry after 2.721804623s: waiting for machine to come up
	I0311 21:34:11.524646   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.902991705s)
	I0311 21:34:11.524683   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0311 21:34:11.524711   70458 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:11.524787   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0311 21:34:13.704750   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.179921724s)
	I0311 21:34:13.704786   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0311 21:34:13.704817   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:13.704868   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0311 21:34:15.496186   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:15.496686   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:15.496722   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:15.496627   71579 retry.go:31] will retry after 2.568850361s: waiting for machine to come up
	I0311 21:34:18.068470   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:18.068926   70604 main.go:141] libmachine: (embed-certs-743937) DBG | unable to find current IP address of domain embed-certs-743937 in network mk-embed-certs-743937
	I0311 21:34:18.068959   70604 main.go:141] libmachine: (embed-certs-743937) DBG | I0311 21:34:18.068872   71579 retry.go:31] will retry after 4.111366971s: waiting for machine to come up
	I0311 21:34:16.267427   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.562528088s)
	I0311 21:34:16.267458   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0311 21:34:16.267486   70458 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:16.267535   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0311 21:34:17.218029   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0311 21:34:17.218065   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:17.218104   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0311 21:34:18.987120   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.768996335s)
	I0311 21:34:18.987149   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0311 21:34:18.987167   70458 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:18.987219   70458 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0311 21:34:23.543571   70908 start.go:364] duration metric: took 4m22.394278247s to acquireMachinesLock for "old-k8s-version-239315"
	I0311 21:34:23.543649   70908 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:23.543661   70908 fix.go:54] fixHost starting: 
	I0311 21:34:23.544084   70908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:23.544139   70908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:23.561669   70908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0311 21:34:23.562158   70908 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:23.562618   70908 main.go:141] libmachine: Using API Version  1
	I0311 21:34:23.562645   70908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:23.562949   70908 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:23.563114   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:23.563306   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetState
	I0311 21:34:23.565152   70908 fix.go:112] recreateIfNeeded on old-k8s-version-239315: state=Stopped err=<nil>
	I0311 21:34:23.565178   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	W0311 21:34:23.565351   70908 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:23.567943   70908 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239315" ...
	I0311 21:34:22.182707   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.183200   70604 main.go:141] libmachine: (embed-certs-743937) Found IP for machine: 192.168.50.114
	I0311 21:34:22.183228   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has current primary IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.183238   70604 main.go:141] libmachine: (embed-certs-743937) Reserving static IP address...
	I0311 21:34:22.183694   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "embed-certs-743937", mac: "52:54:00:84:b4:7a", ip: "192.168.50.114"} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.183716   70604 main.go:141] libmachine: (embed-certs-743937) DBG | skip adding static IP to network mk-embed-certs-743937 - found existing host DHCP lease matching {name: "embed-certs-743937", mac: "52:54:00:84:b4:7a", ip: "192.168.50.114"}
	I0311 21:34:22.183728   70604 main.go:141] libmachine: (embed-certs-743937) Reserved static IP address: 192.168.50.114
	I0311 21:34:22.183746   70604 main.go:141] libmachine: (embed-certs-743937) Waiting for SSH to be available...
	I0311 21:34:22.183760   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Getting to WaitForSSH function...
	I0311 21:34:22.185820   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.186157   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.186193   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.186285   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Using SSH client type: external
	I0311 21:34:22.186317   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa (-rw-------)
	I0311 21:34:22.186349   70604 main.go:141] libmachine: (embed-certs-743937) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:22.186368   70604 main.go:141] libmachine: (embed-certs-743937) DBG | About to run SSH command:
	I0311 21:34:22.186384   70604 main.go:141] libmachine: (embed-certs-743937) DBG | exit 0
	I0311 21:34:22.313253   70604 main.go:141] libmachine: (embed-certs-743937) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:22.313570   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetConfigRaw
	I0311 21:34:22.314271   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:22.317040   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.317404   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.317509   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.317641   70604 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/config.json ...
	I0311 21:34:22.317814   70604 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:22.317830   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:22.318049   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.320550   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.320833   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.320859   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.320992   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.321223   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.321405   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.321547   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.321708   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.321930   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.321944   70604 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:22.430028   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:22.430055   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.430345   70604 buildroot.go:166] provisioning hostname "embed-certs-743937"
	I0311 21:34:22.430374   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.430568   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.433555   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.433884   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.433907   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.434102   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.434311   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.434474   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.434611   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.434762   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.434936   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.434954   70604 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-743937 && echo "embed-certs-743937" | sudo tee /etc/hostname
	I0311 21:34:22.564819   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-743937
	
	I0311 21:34:22.564848   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.567667   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.568075   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.568122   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.568325   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.568519   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.568719   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.568913   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.569094   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:22.569335   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:22.569361   70604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-743937' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-743937/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-743937' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:22.684397   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:22.684425   70604 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:22.684473   70604 buildroot.go:174] setting up certificates
	I0311 21:34:22.684490   70604 provision.go:84] configureAuth start
	I0311 21:34:22.684507   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetMachineName
	I0311 21:34:22.684840   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:22.687805   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.688156   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.688178   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.688401   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.690975   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.691302   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.691321   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.691469   70604 provision.go:143] copyHostCerts
	I0311 21:34:22.691528   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:22.691540   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:22.691598   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:22.691690   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:22.691706   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:22.691729   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:22.691834   70604 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:22.691850   70604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:22.691878   70604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:22.691946   70604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.embed-certs-743937 san=[127.0.0.1 192.168.50.114 embed-certs-743937 localhost minikube]
	I0311 21:34:22.838395   70604 provision.go:177] copyRemoteCerts
	I0311 21:34:22.838452   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:22.838478   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:22.840975   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.841308   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:22.841342   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:22.841487   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:22.841684   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:22.841834   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:22.841968   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:22.924202   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:22.956079   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0311 21:34:22.982352   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 21:34:23.008286   70604 provision.go:87] duration metric: took 323.780619ms to configureAuth
	I0311 21:34:23.008311   70604 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:23.008481   70604 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:34:23.008553   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.011128   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.011439   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.011461   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.011632   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.011780   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.011919   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.012094   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.012278   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:23.012436   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:23.012452   70604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:23.288122   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:23.288146   70604 machine.go:97] duration metric: took 970.321311ms to provisionDockerMachine
	I0311 21:34:23.288157   70604 start.go:293] postStartSetup for "embed-certs-743937" (driver="kvm2")
	I0311 21:34:23.288167   70604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:23.288180   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.288496   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:23.288532   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.291434   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.291823   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.291856   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.292079   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.292297   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.292468   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.292629   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.376367   70604 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:23.381629   70604 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:23.381660   70604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:23.381754   70604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:23.381855   70604 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:23.381967   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:23.392280   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:23.423241   70604 start.go:296] duration metric: took 135.071082ms for postStartSetup
	I0311 21:34:23.423283   70604 fix.go:56] duration metric: took 19.897275281s for fixHost
	I0311 21:34:23.423310   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.426264   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.426623   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.426652   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.426862   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.427052   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.427256   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.427419   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.427575   70604 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:23.427809   70604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0311 21:34:23.427822   70604 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:23.543425   70604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192863.499269756
	
	I0311 21:34:23.543447   70604 fix.go:216] guest clock: 1710192863.499269756
	I0311 21:34:23.543454   70604 fix.go:229] Guest: 2024-03-11 21:34:23.499269756 +0000 UTC Remote: 2024-03-11 21:34:23.423289031 +0000 UTC m=+304.494814333 (delta=75.980725ms)
	I0311 21:34:23.543472   70604 fix.go:200] guest clock delta is within tolerance: 75.980725ms
	I0311 21:34:23.543478   70604 start.go:83] releasing machines lock for "embed-certs-743937", held for 20.0175167s
	I0311 21:34:23.543504   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.543746   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:23.546763   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.547188   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.547223   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.547396   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.547882   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.548077   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:34:23.548163   70604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:23.548226   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.548282   70604 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:23.548309   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:34:23.551186   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551485   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551609   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.551642   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.551795   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.551979   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:23.552001   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:23.552035   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.552146   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:34:23.552211   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.552277   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:34:23.552368   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.552501   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:34:23.552666   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:34:23.660064   70604 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:23.668731   70604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:23.831784   70604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:23.840331   70604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:23.840396   70604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:23.864730   70604 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:23.864766   70604 start.go:494] detecting cgroup driver to use...
	I0311 21:34:23.864831   70604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:23.886072   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:23.901660   70604 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:23.901727   70604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:23.917374   70604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:23.932525   70604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:24.066368   70604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:24.222425   70604 docker.go:233] disabling docker service ...
	I0311 21:34:24.222487   70604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:24.240937   70604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:24.257050   70604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:24.395003   70604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:24.550709   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:24.572524   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:24.599710   70604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:34:24.599776   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.612426   70604 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:24.612514   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.626989   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.639576   70604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:24.653711   70604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:24.673581   70604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:24.684772   70604 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:24.684841   70604 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:24.707855   70604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:24.719801   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:24.904788   70604 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:25.063437   70604 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:25.063511   70604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:25.070294   70604 start.go:562] Will wait 60s for crictl version
	I0311 21:34:25.070352   70604 ssh_runner.go:195] Run: which crictl
	I0311 21:34:25.074945   70604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:25.121979   70604 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:25.122070   70604 ssh_runner.go:195] Run: crio --version
	I0311 21:34:25.159092   70604 ssh_runner.go:195] Run: crio --version
	I0311 21:34:25.207391   70604 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 21:34:21.469205   70458 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.481954559s)
	I0311 21:34:21.469242   70458 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0311 21:34:21.469285   70458 cache_images.go:123] Successfully loaded all cached images
	I0311 21:34:21.469295   70458 cache_images.go:92] duration metric: took 16.40620232s to LoadCachedImages
	I0311 21:34:21.469306   70458 kubeadm.go:928] updating node { 192.168.39.36 8443 v1.29.0-rc.2 crio true true} ...
	I0311 21:34:21.469436   70458 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-324578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:21.469513   70458 ssh_runner.go:195] Run: crio config
	I0311 21:34:21.531635   70458 cni.go:84] Creating CNI manager for ""
	I0311 21:34:21.531659   70458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:21.531671   70458 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:21.531690   70458 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-324578 NodeName:no-preload-324578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:34:21.531820   70458 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-324578"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
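The generated KubeletConfiguration above is filled in from per-profile values (cgroup driver, CRI endpoint, cluster domain). A small illustrative Go text/template sketch that yields the same shape; the KubeletOpts struct is hypothetical, not minikube's actual template types:

package main

import (
	"os"
	"text/template"
)

// KubeletOpts is a hypothetical value struct used only for this illustration.
type KubeletOpts struct {
	CgroupDriver  string
	CRIEndpoint   string
	ClusterDomain string
}

const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRIEndpoint}}
clusterDomain: "{{.ClusterDomain}}"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	_ = t.Execute(os.Stdout, KubeletOpts{
		CgroupDriver:  "cgroupfs",
		CRIEndpoint:   "unix:///var/run/crio/crio.sock",
		ClusterDomain: "cluster.local",
	})
}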
	
	I0311 21:34:21.531876   70458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0311 21:34:21.546000   70458 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:21.546060   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:21.558818   70458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0311 21:34:21.577685   70458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0311 21:34:21.595960   70458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0311 21:34:21.615003   70458 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:21.619290   70458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
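The bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current mapping via a temp file. A local Go sketch of the same filter-and-append (ensureHostsEntry is a hypothetical helper, not the minikube code, and it writes the file directly rather than copying a temp file over it):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry removes any line already mapping the host name and appends
// a fresh "ip<TAB>name" line, then writes the result back in one shot.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.36", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}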
	I0311 21:34:21.633307   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:21.751586   70458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:21.771672   70458 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578 for IP: 192.168.39.36
	I0311 21:34:21.771698   70458 certs.go:194] generating shared ca certs ...
	I0311 21:34:21.771717   70458 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:21.771907   70458 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:21.771975   70458 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:21.771987   70458 certs.go:256] generating profile certs ...
	I0311 21:34:21.772093   70458 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/client.key
	I0311 21:34:21.772190   70458 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.key.681a9200
	I0311 21:34:21.772244   70458 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.key
	I0311 21:34:21.772371   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:21.772421   70458 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:21.772435   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:21.772475   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:21.772509   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:21.772542   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:21.772606   70458 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:21.773241   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:21.833566   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:21.868156   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:21.910118   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:21.952222   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0311 21:34:21.988148   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 21:34:22.018493   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:22.045225   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/no-preload-324578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:22.071481   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:22.097525   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:22.123425   70458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:22.156613   70458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:22.174679   70458 ssh_runner.go:195] Run: openssl version
	I0311 21:34:22.181137   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:22.197490   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.203508   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.203556   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:22.210822   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:22.224269   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:22.237282   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.242762   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.242816   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:22.249334   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:22.261866   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:22.273674   70458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.279004   70458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.279055   70458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:22.285394   70458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:22.299493   70458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:22.304827   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:22.311349   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:22.318377   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:22.325621   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:22.332316   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:22.338893   70458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
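Each `openssl x509 -checkend 86400` call above asks whether a certificate will still be valid 24 hours from now. An equivalent standalone check with Go's crypto/x509 (path and window are parameters; this is a sketch, not the certs.go implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid `window`
// from now, i.e. the same question as `openssl x509 -checkend <seconds>`.
func validFor(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}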
	I0311 21:34:22.345167   70458 kubeadm.go:391] StartCluster: {Name:no-preload-324578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-324578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:22.345246   70458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:22.345286   70458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:22.386703   70458 cri.go:89] found id: ""
	I0311 21:34:22.386785   70458 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:22.398475   70458 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:22.398494   70458 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:22.398500   70458 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:22.398558   70458 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:22.409434   70458 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:22.410675   70458 kubeconfig.go:125] found "no-preload-324578" server: "https://192.168.39.36:8443"
	I0311 21:34:22.412906   70458 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:22.423677   70458 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.36
	I0311 21:34:22.423708   70458 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:22.423719   70458 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:22.423762   70458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:22.472548   70458 cri.go:89] found id: ""
	I0311 21:34:22.472615   70458 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:22.494701   70458 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:22.506944   70458 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:22.506964   70458 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:22.507015   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:22.517468   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:22.517521   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:22.528281   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:22.538496   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:22.538533   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:22.553009   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:22.566120   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:22.566189   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:22.579239   70458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:22.590180   70458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:22.590227   70458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:22.602988   70458 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:22.615631   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:22.730568   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.355205   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.588923   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:23.694870   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
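The restart path above re-runs individual kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml rather than doing a full init. A hedged sketch of the same loop, executing kubeadm locally instead of through the ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phase order mirrors the logged commands for restartPrimaryControlPlane.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
}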
	I0311 21:34:23.796820   70458 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:23.796918   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.297341   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.797197   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:24.840030   70458 api_server.go:72] duration metric: took 1.043209284s to wait for apiserver process to appear ...
	I0311 21:34:24.840062   70458 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:34:24.840101   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:24.840560   70458 api_server.go:269] stopped: https://192.168.39.36:8443/healthz: Get "https://192.168.39.36:8443/healthz": dial tcp 192.168.39.36:8443: connect: connection refused
	I0311 21:34:25.341161   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:23.569356   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .Start
	I0311 21:34:23.569527   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring networks are active...
	I0311 21:34:23.570188   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network default is active
	I0311 21:34:23.570613   70908 main.go:141] libmachine: (old-k8s-version-239315) Ensuring network mk-old-k8s-version-239315 is active
	I0311 21:34:23.571070   70908 main.go:141] libmachine: (old-k8s-version-239315) Getting domain xml...
	I0311 21:34:23.571836   70908 main.go:141] libmachine: (old-k8s-version-239315) Creating domain...
	I0311 21:34:24.895619   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting to get IP...
	I0311 21:34:24.896680   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:24.897160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:24.897218   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:24.897131   71714 retry.go:31] will retry after 268.563191ms: waiting for machine to come up
	I0311 21:34:25.167783   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.168312   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.168343   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.168268   71714 retry.go:31] will retry after 245.059124ms: waiting for machine to come up
	I0311 21:34:25.414644   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.415139   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.415168   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.415100   71714 retry.go:31] will retry after 407.807793ms: waiting for machine to come up
	I0311 21:34:25.824887   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:25.825351   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:25.825379   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:25.825274   71714 retry.go:31] will retry after 503.187834ms: waiting for machine to come up
	I0311 21:34:25.208819   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetIP
	I0311 21:34:25.211726   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:25.212203   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:34:25.212244   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:34:25.212486   70604 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:25.217365   70604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:25.233670   70604 kubeadm.go:877] updating cluster {Name:embed-certs-743937 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:25.233825   70604 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:34:25.233886   70604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:25.282028   70604 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 21:34:25.282108   70604 ssh_runner.go:195] Run: which lz4
	I0311 21:34:25.287047   70604 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:34:25.291721   70604 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:34:25.291751   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 21:34:27.414481   70604 crio.go:444] duration metric: took 2.127464595s to copy over tarball
	I0311 21:34:27.414554   70604 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:34:28.225996   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:28.226031   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:28.226048   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.285274   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:28.285307   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:28.340493   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.512353   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:28.512409   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:28.840800   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:28.852523   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:28.852560   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:29.341135   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:29.354997   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:29.355028   70458 api_server.go:103] status: https://192.168.39.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:29.840769   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:34:29.848023   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0311 21:34:29.856262   70458 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:34:29.856290   70458 api_server.go:131] duration metric: took 5.016219789s to wait for apiserver health ...
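The healthz loop above tolerates connection refusals, 403s and 500s while the apiserver and its post-start hooks come up, and only stops once /healthz returns 200 with body "ok". A minimal polling sketch of that wait; TLS verification is skipped here because the probe is anonymous against the cluster's self-signed endpoint, and the 500ms cadence is taken from the timestamps in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %v", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.36:8443/healthz", time.Minute))
}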
	I0311 21:34:29.856300   70458 cni.go:84] Creating CNI manager for ""
	I0311 21:34:29.856308   70458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:29.858297   70458 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:34:29.859734   70458 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:34:29.891375   70458 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
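The 457-byte conflist copied above configures the bridge CNI recommended for the kvm2 + crio combination. For orientation only, a generic bridge-plus-portmap conflist of this kind is sketched below; it is not the literal file minikube writes, and the field values (bridge name, subnet) are assumptions matching the 10.244.0.0/16 pod CIDR seen earlier:

package main

import "os"

// A generic bridge CNI configuration of the kind written to /etc/cni/net.d/1-k8s.conflist.
// Illustrative only; the exact contents minikube generates may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}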
	I0311 21:34:29.932393   70458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:34:29.959208   70458 system_pods.go:59] 8 kube-system pods found
	I0311 21:34:29.959257   70458 system_pods.go:61] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:34:29.959268   70458 system_pods.go:61] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:34:29.959278   70458 system_pods.go:61] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:34:29.959290   70458 system_pods.go:61] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:34:29.959296   70458 system_pods.go:61] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:34:29.959303   70458 system_pods.go:61] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:34:29.959319   70458 system_pods.go:61] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:34:29.959335   70458 system_pods.go:61] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:34:29.959343   70458 system_pods.go:74] duration metric: took 26.926978ms to wait for pod list to return data ...
	I0311 21:34:29.959355   70458 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:34:29.963151   70458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:34:29.963179   70458 node_conditions.go:123] node cpu capacity is 2
	I0311 21:34:29.963193   70458 node_conditions.go:105] duration metric: took 3.825246ms to run NodePressure ...
	I0311 21:34:29.963209   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:26.330005   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:26.330547   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:26.330569   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:26.330464   71714 retry.go:31] will retry after 723.914956ms: waiting for machine to come up
	I0311 21:34:27.056271   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.056879   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.056901   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.056834   71714 retry.go:31] will retry after 693.583075ms: waiting for machine to come up
	I0311 21:34:27.752514   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:27.752958   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:27.752980   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:27.752916   71714 retry.go:31] will retry after 902.247864ms: waiting for machine to come up
	I0311 21:34:28.657551   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:28.658023   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:28.658079   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:28.658008   71714 retry.go:31] will retry after 1.140425887s: waiting for machine to come up
	I0311 21:34:29.800305   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:29.800824   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:29.800852   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:29.800774   71714 retry.go:31] will retry after 1.68593342s: waiting for machine to come up
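The libmachine lines above poll the KVM domain for a DHCP lease, sleeping a growing interval between attempts (268ms, 245ms, 407ms, ... 1.68s). A generic retry helper in the same spirit, illustrative rather than the retry.go implementation referenced in the log:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff calls check until it succeeds or maxWait elapses,
// roughly doubling the sleep between attempts as the log's retry lines do.
func retryWithBackoff(check func() error, initial, maxWait time.Duration) error {
	delay := initial
	deadline := time.Now().Add(maxWait)
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 250*time.Millisecond, 30*time.Second)
	fmt.Println(err)
}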
	I0311 21:34:32.367999   70458 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.404768175s)
	I0311 21:34:32.368034   70458 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:34:32.375444   70458 kubeadm.go:733] kubelet initialised
	I0311 21:34:32.375468   70458 kubeadm.go:734] duration metric: took 7.423643ms waiting for restarted kubelet to initialise ...
	I0311 21:34:32.375477   70458 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:32.383579   70458 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.389728   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "coredns-76f75df574-s6lsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.389755   70458 pod_ready.go:81] duration metric: took 6.144226ms for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.389766   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "coredns-76f75df574-s6lsb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.389775   70458 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.398797   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "etcd-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.398822   70458 pod_ready.go:81] duration metric: took 9.033188ms for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.398833   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "etcd-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.398841   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.407870   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-apiserver-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.407905   70458 pod_ready.go:81] duration metric: took 9.056349ms for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.407915   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-apiserver-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.407928   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.414434   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.414455   70458 pod_ready.go:81] duration metric: took 6.519611ms for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.414463   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.414468   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:32.771994   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-proxy-rmz4b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.772025   70458 pod_ready.go:81] duration metric: took 357.549783ms for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:32.772034   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-proxy-rmz4b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:32.772041   70458 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:33.175562   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "kube-scheduler-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.175595   70458 pod_ready.go:81] duration metric: took 403.546508ms for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:33.175608   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "kube-scheduler-no-preload-324578" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.175617   70458 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:33.573749   70458 pod_ready.go:97] node "no-preload-324578" hosting pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.573777   70458 pod_ready.go:81] duration metric: took 398.141162ms for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	E0311 21:34:33.573789   70458 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-324578" hosting pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:33.573799   70458 pod_ready.go:38] duration metric: took 1.198311127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:33.573862   70458 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:34:33.592112   70458 ops.go:34] apiserver oom_adj: -16
	I0311 21:34:33.592148   70458 kubeadm.go:591] duration metric: took 11.193640837s to restartPrimaryControlPlane
	I0311 21:34:33.592161   70458 kubeadm.go:393] duration metric: took 11.247001751s to StartCluster
	I0311 21:34:33.592181   70458 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:33.592269   70458 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:34:33.594144   70458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:33.594461   70458 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:34:33.596303   70458 out.go:177] * Verifying Kubernetes components...
	I0311 21:34:33.594553   70458 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:34:33.594702   70458 config.go:182] Loaded profile config "no-preload-324578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0311 21:34:33.597724   70458 addons.go:69] Setting default-storageclass=true in profile "no-preload-324578"
	I0311 21:34:33.597727   70458 addons.go:69] Setting storage-provisioner=true in profile "no-preload-324578"
	I0311 21:34:33.597739   70458 addons.go:69] Setting metrics-server=true in profile "no-preload-324578"
	I0311 21:34:33.597759   70458 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-324578"
	I0311 21:34:33.597771   70458 addons.go:234] Setting addon storage-provisioner=true in "no-preload-324578"
	I0311 21:34:33.597772   70458 addons.go:234] Setting addon metrics-server=true in "no-preload-324578"
	W0311 21:34:33.597780   70458 addons.go:243] addon storage-provisioner should already be in state true
	W0311 21:34:33.597795   70458 addons.go:243] addon metrics-server should already be in state true
	I0311 21:34:33.597828   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.597838   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.597733   70458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:33.598079   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598110   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.598224   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598260   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.598305   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.598269   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.613473   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0311 21:34:33.613994   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.614558   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.614576   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.614946   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.615385   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.615415   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.618026   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0311 21:34:33.618201   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0311 21:34:33.618370   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.618497   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.618818   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.618833   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.618978   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.618989   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.619157   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.619343   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.619389   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.619926   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.619956   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.623211   70458 addons.go:234] Setting addon default-storageclass=true in "no-preload-324578"
	W0311 21:34:33.623232   70458 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:34:33.623260   70458 host.go:66] Checking if "no-preload-324578" exists ...
	I0311 21:34:33.623634   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.623660   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.635263   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35961
	I0311 21:34:33.635575   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.636071   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.636080   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.636462   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.636606   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.638520   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.640583   70458 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:34:33.642029   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:34:33.642045   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:34:33.642058   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.640562   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0311 21:34:33.641020   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0311 21:34:33.642572   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.643082   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.643107   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.643432   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.644002   70458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:33.644030   70458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:33.644213   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.644711   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.644733   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.645120   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.645334   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.645406   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.645861   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.645888   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.646042   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.646332   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.646548   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.646719   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.646986   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.648681   70458 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:30.659466   70604 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.244884989s)
	I0311 21:34:30.659492   70604 crio.go:451] duration metric: took 3.244983149s to extract the tarball
	I0311 21:34:30.659500   70604 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:34:30.708661   70604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:30.769502   70604 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:34:30.769530   70604 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:34:30.769540   70604 kubeadm.go:928] updating node { 192.168.50.114 8443 v1.28.4 crio true true} ...
	I0311 21:34:30.769675   70604 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-743937 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:30.769757   70604 ssh_runner.go:195] Run: crio config
	I0311 21:34:30.820223   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:34:30.820251   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:30.820267   70604 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:30.820296   70604 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-743937 NodeName:embed-certs-743937 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:34:30.820475   70604 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-743937"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:30.820563   70604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 21:34:30.833086   70604 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:30.833175   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:30.844335   70604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0311 21:34:30.863586   70604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:34:30.883598   70604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0311 21:34:30.904711   70604 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:30.909433   70604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:30.924054   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:31.064573   70604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:31.096931   70604 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937 for IP: 192.168.50.114
	I0311 21:34:31.096960   70604 certs.go:194] generating shared ca certs ...
	I0311 21:34:31.096980   70604 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:31.097157   70604 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:31.097220   70604 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:31.097236   70604 certs.go:256] generating profile certs ...
	I0311 21:34:31.097368   70604 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/client.key
	I0311 21:34:31.097453   70604 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.key.c230aed9
	I0311 21:34:31.097520   70604 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.key
	I0311 21:34:31.097660   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:31.097709   70604 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:31.097770   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:31.097826   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:31.097867   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:31.097899   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:31.097958   70604 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:31.098771   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:31.135109   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:31.173483   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:31.215059   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:31.253244   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0311 21:34:31.305450   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 21:34:31.340238   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:31.366993   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/embed-certs-743937/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:31.393936   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:31.420998   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:31.446500   70604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:31.474047   70604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:31.493935   70604 ssh_runner.go:195] Run: openssl version
	I0311 21:34:31.500607   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:31.513874   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.519255   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.519303   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:31.525967   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:31.538995   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:31.551625   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.557235   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.557292   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:31.563658   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:31.576689   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:31.589299   70604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.594405   70604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.594453   70604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:31.601041   70604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:31.619307   70604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:31.624565   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:31.632121   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:31.638843   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:31.646400   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:31.652701   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:31.659661   70604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 21:34:31.666390   70604 kubeadm.go:391] StartCluster: {Name:embed-certs-743937 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-743937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:31.666496   70604 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:31.666546   70604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:31.716714   70604 cri.go:89] found id: ""
	I0311 21:34:31.716796   70604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:31.733945   70604 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:31.733967   70604 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:31.733974   70604 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:31.734019   70604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:31.746543   70604 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:31.747720   70604 kubeconfig.go:125] found "embed-certs-743937" server: "https://192.168.50.114:8443"
	I0311 21:34:31.749670   70604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:31.762374   70604 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.114
	I0311 21:34:31.762401   70604 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:31.762410   70604 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:31.762462   70604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:31.811965   70604 cri.go:89] found id: ""
	I0311 21:34:31.812055   70604 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:31.836539   70604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:31.849272   70604 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:31.849295   70604 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:31.849348   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:31.861345   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:31.861423   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:31.875436   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:31.887183   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:31.887251   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:31.900032   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:31.911614   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:31.911690   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:31.924791   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:31.937131   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:31.937204   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:31.949123   70604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:31.960234   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:32.089622   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:32.806370   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.033263   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.135981   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:33.248827   70604 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:33.248917   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:33.749207   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:33.650190   70458 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:34:33.650207   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:34:33.650223   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.653451   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.653895   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.653920   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.654131   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.654302   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.654472   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.654631   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.689121   70458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0311 21:34:33.689487   70458 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:33.693084   70458 main.go:141] libmachine: Using API Version  1
	I0311 21:34:33.693105   70458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:33.693596   70458 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:33.693796   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetState
	I0311 21:34:33.696074   70458 main.go:141] libmachine: (no-preload-324578) Calling .DriverName
	I0311 21:34:33.696629   70458 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:34:33.696644   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:34:33.696662   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHHostname
	I0311 21:34:33.699920   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.700323   70458 main.go:141] libmachine: (no-preload-324578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:fc:98", ip: ""} in network mk-no-preload-324578: {Iface:virbr1 ExpiryTime:2024-03-11 22:33:54 +0000 UTC Type:0 Mac:52:54:00:00:fc:98 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:no-preload-324578 Clientid:01:52:54:00:00:fc:98}
	I0311 21:34:33.700342   70458 main.go:141] libmachine: (no-preload-324578) DBG | domain no-preload-324578 has defined IP address 192.168.39.36 and MAC address 52:54:00:00:fc:98 in network mk-no-preload-324578
	I0311 21:34:33.700564   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHPort
	I0311 21:34:33.700756   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHKeyPath
	I0311 21:34:33.700859   70458 main.go:141] libmachine: (no-preload-324578) Calling .GetSSHUsername
	I0311 21:34:33.700932   70458 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/no-preload-324578/id_rsa Username:docker}
	I0311 21:34:33.896331   70458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:33.969322   70458 node_ready.go:35] waiting up to 6m0s for node "no-preload-324578" to be "Ready" ...
	I0311 21:34:34.037114   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:34:34.059051   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:34:34.059080   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:34:34.094822   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:34:34.142231   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:34:34.142259   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:34:34.218979   70458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:34:34.219002   70458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:34:34.260381   70458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:34:35.648210   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.61103949s)
	I0311 21:34:35.648241   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.553388189s)
	I0311 21:34:35.648344   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648381   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648367   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648409   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648658   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.648675   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.648685   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.648694   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.648754   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.648997   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.649019   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.650050   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.650068   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.650091   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.650101   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.650367   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.650384   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.658738   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.658764   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.658991   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.659007   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.687393   70458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.426969773s)
	I0311 21:34:35.687453   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.687467   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.687771   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.687810   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.687828   70458 main.go:141] libmachine: Making call to close driver server
	I0311 21:34:35.687848   70458 main.go:141] libmachine: (no-preload-324578) Calling .Close
	I0311 21:34:35.687831   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.688142   70458 main.go:141] libmachine: (no-preload-324578) DBG | Closing plugin on server side
	I0311 21:34:35.688164   70458 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:34:35.688178   70458 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:34:35.688214   70458 addons.go:470] Verifying addon metrics-server=true in "no-preload-324578"
	I0311 21:34:35.690413   70458 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0311 21:34:31.488010   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:31.488449   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:31.488471   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:31.488421   71714 retry.go:31] will retry after 2.325869089s: waiting for machine to come up
	I0311 21:34:33.815568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:33.816215   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:33.816236   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:33.816176   71714 retry.go:31] will retry after 2.457084002s: waiting for machine to come up
	I0311 21:34:34.249462   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:34.749177   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:34.778830   70604 api_server.go:72] duration metric: took 1.530004395s to wait for apiserver process to appear ...
	I0311 21:34:34.778858   70604 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:34:34.778879   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:34.779469   70604 api_server.go:269] stopped: https://192.168.50.114:8443/healthz: Get "https://192.168.50.114:8443/healthz": dial tcp 192.168.50.114:8443: connect: connection refused
	I0311 21:34:35.279027   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.110193   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:38.110221   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:38.110234   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.159861   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:34:38.159909   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:34:38.279045   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.289460   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:38.289491   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:38.779423   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:38.785174   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:38.785206   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:39.278910   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:39.290017   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:34:39.290054   70604 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:34:39.779616   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:34:39.786362   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0311 21:34:39.794557   70604 api_server.go:141] control plane version: v1.28.4
	I0311 21:34:39.794583   70604 api_server.go:131] duration metric: took 5.01571788s to wait for apiserver health ...
	I0311 21:34:39.794594   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:34:39.794601   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:39.796063   70604 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
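Note on the wait above: the repeated 500 dumps are the same /healthz probe fired roughly every half second until the rbac and scheduling post-start hooks finish and the endpoint finally returns 200. As a rough, self-contained sketch of that polling pattern (hypothetical function names; certificate verification is skipped only to keep the sketch short, whereas the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
// This is only a sketch of the pattern visible in the log, not minikube's code.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert in this sketch's setup;
		// skipping verification keeps the example self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.114:8443/healthz", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}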
	I0311 21:34:35.691844   70458 addons.go:505] duration metric: took 2.097304232s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0311 21:34:35.974533   70458 node_ready.go:53] node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:37.983073   70458 node_ready.go:53] node "no-preload-324578" has status "Ready":"False"
	I0311 21:34:38.977713   70458 node_ready.go:49] node "no-preload-324578" has status "Ready":"True"
	I0311 21:34:38.977738   70458 node_ready.go:38] duration metric: took 5.008382488s for node "no-preload-324578" to be "Ready" ...
	I0311 21:34:38.977749   70458 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:38.986414   70458 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:38.993430   70458 pod_ready.go:92] pod "coredns-76f75df574-s6lsb" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:38.993454   70458 pod_ready.go:81] duration metric: took 7.012539ms for pod "coredns-76f75df574-s6lsb" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:38.993465   70458 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
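Once the node flips to Ready, the run waits for each system-critical pod's Ready condition in turn. A minimal client-go sketch of one such per-pod wait (the kubeconfig path and pod name below are assumptions used for illustration, not values fixed by this run):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; the test harness resolves this from the profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-s6lsb", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}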
	I0311 21:34:36.274640   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:36.275119   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:36.275157   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:36.275064   71714 retry.go:31] will retry after 3.618026102s: waiting for machine to come up
	I0311 21:34:39.894877   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:39.895397   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | unable to find current IP address of domain old-k8s-version-239315 in network mk-old-k8s-version-239315
	I0311 21:34:39.895447   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | I0311 21:34:39.895343   71714 retry.go:31] will retry after 3.826847061s: waiting for machine to come up
	I0311 21:34:39.797420   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:34:39.810877   70604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:34:39.836773   70604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:34:39.852496   70604 system_pods.go:59] 8 kube-system pods found
	I0311 21:34:39.852541   70604 system_pods.go:61] "coredns-5dd5756b68-czng9" [a57d0643-36c5-44e2-a113-de051d0e0408] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:34:39.852556   70604 system_pods.go:61] "etcd-embed-certs-743937" [9f0051e8-247f-4968-a834-c38c5f0c4407] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:34:39.852567   70604 system_pods.go:61] "kube-apiserver-embed-certs-743937" [4ac979a6-1906-4a58-9d41-9587d66d81ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:34:39.852578   70604 system_pods.go:61] "kube-controller-manager-embed-certs-743937" [263ba100-e911-4857-a973-c4dc9312a653] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:34:39.852591   70604 system_pods.go:61] "kube-proxy-n2qzt" [21f56cfb-a3f5-4c4b-993d-53b6d8f60ec2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:34:39.852600   70604 system_pods.go:61] "kube-scheduler-embed-certs-743937" [0121fa4d-91a8-432b-9f21-c6e8c0b33872] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:34:39.852606   70604 system_pods.go:61] "metrics-server-57f55c9bc5-7qw98" [3d3f2e87-2e36-4ca3-b31c-fc5f38251f03] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:34:39.852617   70604 system_pods.go:61] "storage-provisioner" [72fd13c7-1a79-4e8a-bdc2-f45117599d85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:34:39.852624   70604 system_pods.go:74] duration metric: took 15.823708ms to wait for pod list to return data ...
	I0311 21:34:39.852634   70604 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:34:39.856288   70604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:34:39.856309   70604 node_conditions.go:123] node cpu capacity is 2
	I0311 21:34:39.856317   70604 node_conditions.go:105] duration metric: took 3.676347ms to run NodePressure ...
	I0311 21:34:39.856331   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:40.103882   70604 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:34:40.108726   70604 kubeadm.go:733] kubelet initialised
	I0311 21:34:40.108758   70604 kubeadm.go:734] duration metric: took 4.847245ms waiting for restarted kubelet to initialise ...
	I0311 21:34:40.108768   70604 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:34:40.115566   70604 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-czng9" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:42.124435   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
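The 457-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log. Purely as an illustration, a bridge + host-local conflist of roughly that shape could be generated like this (field names and values are assumed defaults, not the exact file this run wrote):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// A rough guess at the shape of a bridge CNI conflist; values here are
// illustrative defaults, not the exact 457-byte payload the log refers to.
func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	// Writing locally for the sketch; the real flow copies the file over SSH
	// to /etc/cni/net.d/1-k8s.conflist on the guest.
	if err := os.WriteFile("1-k8s.conflist", data, 0o644); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d bytes\n", len(data))
}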
	I0311 21:34:45.026187   70417 start.go:364] duration metric: took 58.09976601s to acquireMachinesLock for "default-k8s-diff-port-766430"
	I0311 21:34:45.026231   70417 start.go:96] Skipping create...Using existing machine configuration
	I0311 21:34:45.026242   70417 fix.go:54] fixHost starting: 
	I0311 21:34:45.026632   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:34:45.026661   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:34:45.046341   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0311 21:34:45.046779   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:34:45.047336   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:34:45.047375   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:34:45.047741   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:34:45.047920   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:34:45.048090   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:34:45.049581   70417 fix.go:112] recreateIfNeeded on default-k8s-diff-port-766430: state=Stopped err=<nil>
	I0311 21:34:45.049605   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	W0311 21:34:45.049759   70417 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 21:34:45.051505   70417 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-766430" ...
	I0311 21:34:41.001474   70458 pod_ready.go:102] pod "etcd-no-preload-324578" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:43.500991   70458 pod_ready.go:92] pod "etcd-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.501018   70458 pod_ready.go:81] duration metric: took 4.507545237s for pod "etcd-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.501030   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.506732   70458 pod_ready.go:92] pod "kube-apiserver-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.506753   70458 pod_ready.go:81] duration metric: took 5.714866ms for pod "kube-apiserver-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.506764   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.511432   70458 pod_ready.go:92] pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.511456   70458 pod_ready.go:81] duration metric: took 4.684021ms for pod "kube-controller-manager-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.511469   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.516333   70458 pod_ready.go:92] pod "kube-proxy-rmz4b" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.516360   70458 pod_ready.go:81] duration metric: took 4.882955ms for pod "kube-proxy-rmz4b" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.516370   70458 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.521501   70458 pod_ready.go:92] pod "kube-scheduler-no-preload-324578" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:43.521524   70458 pod_ready.go:81] duration metric: took 5.146945ms for pod "kube-scheduler-no-preload-324578" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.521532   70458 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:43.723851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724335   70908 main.go:141] libmachine: (old-k8s-version-239315) Found IP for machine: 192.168.72.52
	I0311 21:34:43.724367   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserving static IP address...
	I0311 21:34:43.724382   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has current primary IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.724722   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.724759   70908 main.go:141] libmachine: (old-k8s-version-239315) Reserved static IP address: 192.168.72.52
	I0311 21:34:43.724774   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | skip adding static IP to network mk-old-k8s-version-239315 - found existing host DHCP lease matching {name: "old-k8s-version-239315", mac: "52:54:00:5b:9d:32", ip: "192.168.72.52"}
	I0311 21:34:43.724797   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Getting to WaitForSSH function...
	I0311 21:34:43.724815   70908 main.go:141] libmachine: (old-k8s-version-239315) Waiting for SSH to be available...
	I0311 21:34:43.727015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727330   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.727354   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.727541   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH client type: external
	I0311 21:34:43.727568   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa (-rw-------)
	I0311 21:34:43.727624   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:34:43.727641   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | About to run SSH command:
	I0311 21:34:43.727651   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | exit 0
	I0311 21:34:43.848884   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | SSH cmd err, output: <nil>: 
	I0311 21:34:43.849287   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetConfigRaw
	I0311 21:34:43.850084   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:43.852942   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853529   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.853572   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.853801   70908 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/config.json ...
	I0311 21:34:43.854001   70908 machine.go:94] provisionDockerMachine start ...
	I0311 21:34:43.854024   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:43.854255   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.856623   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857153   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.857187   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.857321   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.857516   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857702   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.857897   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.858105   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.858332   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.858349   70908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:34:43.961617   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:34:43.961664   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.961921   70908 buildroot.go:166] provisioning hostname "old-k8s-version-239315"
	I0311 21:34:43.961945   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:43.962134   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:43.964672   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.964987   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:43.965015   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:43.965122   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:43.965305   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965466   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:43.965591   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:43.965801   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:43.966042   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:43.966055   70908 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239315 && echo "old-k8s-version-239315" | sudo tee /etc/hostname
	I0311 21:34:44.088097   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239315
	
	I0311 21:34:44.088126   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.090911   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091167   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.091205   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.091347   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.091524   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091680   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.091818   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.091984   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.092185   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.092205   70908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239315' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239315/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239315' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:34:44.207643   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:34:44.207674   70908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:34:44.207693   70908 buildroot.go:174] setting up certificates
	I0311 21:34:44.207701   70908 provision.go:84] configureAuth start
	I0311 21:34:44.207710   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetMachineName
	I0311 21:34:44.207975   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:44.211160   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211556   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.211588   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.211754   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.214211   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214553   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.214585   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.214732   70908 provision.go:143] copyHostCerts
	I0311 21:34:44.214797   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:34:44.214813   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:34:44.214886   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:34:44.214991   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:34:44.215005   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:34:44.215035   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:34:44.215160   70908 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:34:44.215171   70908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:34:44.215198   70908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:34:44.215267   70908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239315 san=[127.0.0.1 192.168.72.52 localhost minikube old-k8s-version-239315]
	I0311 21:34:44.305250   70908 provision.go:177] copyRemoteCerts
	I0311 21:34:44.305329   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:34:44.305367   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.308244   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308636   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.308673   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.308874   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.309092   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.309290   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.309446   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.394958   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:34:44.423314   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0311 21:34:44.459338   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:34:44.491201   70908 provision.go:87] duration metric: took 283.487383ms to configureAuth
	I0311 21:34:44.491232   70908 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:34:44.491419   70908 config.go:182] Loaded profile config "old-k8s-version-239315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:34:44.491484   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.494039   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494476   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.494509   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.494638   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.494830   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.494998   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.495175   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.495366   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.495548   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.495570   70908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:34:44.787935   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:34:44.787961   70908 machine.go:97] duration metric: took 933.945971ms to provisionDockerMachine
	I0311 21:34:44.787971   70908 start.go:293] postStartSetup for "old-k8s-version-239315" (driver="kvm2")
	I0311 21:34:44.787983   70908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:34:44.788007   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:44.788327   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:34:44.788355   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.791133   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791460   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.791492   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.791637   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.791858   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.792021   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.792165   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:44.877163   70908 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:34:44.882141   70908 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:34:44.882164   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:34:44.882241   70908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:34:44.882330   70908 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:34:44.882442   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:34:44.894699   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:44.919809   70908 start.go:296] duration metric: took 131.8264ms for postStartSetup
	I0311 21:34:44.919848   70908 fix.go:56] duration metric: took 21.376188092s for fixHost
	I0311 21:34:44.919867   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:44.922414   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922708   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:44.922738   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:44.922876   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:44.923075   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923274   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:44.923455   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:44.923618   70908 main.go:141] libmachine: Using SSH client type: native
	I0311 21:34:44.923806   70908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.72.52 22 <nil> <nil>}
	I0311 21:34:44.923831   70908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 21:34:45.026068   70908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192885.004450463
	
	I0311 21:34:45.026088   70908 fix.go:216] guest clock: 1710192885.004450463
	I0311 21:34:45.026096   70908 fix.go:229] Guest: 2024-03-11 21:34:45.004450463 +0000 UTC Remote: 2024-03-11 21:34:44.919851167 +0000 UTC m=+283.922086595 (delta=84.599296ms)
	I0311 21:34:45.026118   70908 fix.go:200] guest clock delta is within tolerance: 84.599296ms
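The guest-clock check just above compares the output of `date +%s.%N` on the guest with a host-side timestamp and accepts the drift if the delta stays inside a tolerance. A small sketch of that comparison, reusing the sample values from the log and an assumed tolerance of a few seconds:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// Parses a guest timestamp (the `date +%s.%N` output from the log) and
// compares it with the host-side reference time. Tolerance is an assumption.
func main() {
	guestOut := "1710192885.004450463" // what the SSH `date` command printed
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	local := time.Date(2024, 3, 11, 21, 34, 44, 919851167, time.UTC) // "Remote" time in the log
	delta := guest.Sub(local)
	tolerance := 2 * time.Second // assumed value for the sketch
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
	}
}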
	I0311 21:34:45.026124   70908 start.go:83] releasing machines lock for "old-k8s-version-239315", held for 21.482500591s
	I0311 21:34:45.026158   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.026440   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:45.029366   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029778   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.029813   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.029992   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030514   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030711   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .DriverName
	I0311 21:34:45.030800   70908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:34:45.030846   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.030946   70908 ssh_runner.go:195] Run: cat /version.json
	I0311 21:34:45.030971   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHHostname
	I0311 21:34:45.033851   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.033989   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034264   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034292   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034324   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:45.034348   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:45.034429   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034618   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034633   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHPort
	I0311 21:34:45.034799   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHKeyPath
	I0311 21:34:45.034814   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034979   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetSSHUsername
	I0311 21:34:45.034977   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.035143   70908 sshutil.go:53] new ssh client: &{IP:192.168.72.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/old-k8s-version-239315/id_rsa Username:docker}
	I0311 21:34:45.135748   70908 ssh_runner.go:195] Run: systemctl --version
	I0311 21:34:45.142408   70908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:34:45.297445   70908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:34:45.304482   70908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:34:45.304552   70908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:34:45.322754   70908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:34:45.322775   70908 start.go:494] detecting cgroup driver to use...
	I0311 21:34:45.322832   70908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:34:45.345988   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:34:45.363267   70908 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:34:45.363320   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:34:45.380892   70908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:34:45.396972   70908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:34:45.531640   70908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:34:45.700243   70908 docker.go:233] disabling docker service ...
	I0311 21:34:45.700306   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:34:45.730542   70908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:34:45.749068   70908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:34:45.903721   70908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:34:46.045122   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:34:46.065278   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:34:46.090726   70908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0311 21:34:46.090779   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.105783   70908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:34:46.105841   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.121702   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.136262   70908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:34:46.150628   70908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:34:46.163771   70908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:34:46.175613   70908 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:34:46.175675   70908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:34:46.193848   70908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:34:46.205694   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:46.344832   70908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:34:46.501773   70908 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:34:46.501851   70908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:34:46.507932   70908 start.go:562] Will wait 60s for crictl version
	I0311 21:34:46.507988   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:46.512337   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:34:46.555165   70908 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:34:46.555249   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.588554   70908 ssh_runner.go:195] Run: crio --version
	I0311 21:34:46.623785   70908 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
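The CRI-O preparation above amounts to in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) followed by a daemon-reload and crio restart. A local, stdlib-only sketch of the same rewrite (path shortened, conmon handling simplified relative to the logged commands):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Rewrites pause_image and cgroup_manager the same way the logged sed
// commands do. Path and error handling are simplified for the sketch.
func main() {
	const path = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("crio config updated; restart crio to apply")
}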
	I0311 21:34:44.627149   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:47.128405   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:45.052882   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Start
	I0311 21:34:45.053039   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring networks are active...
	I0311 21:34:45.053710   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring network default is active
	I0311 21:34:45.054156   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Ensuring network mk-default-k8s-diff-port-766430 is active
	I0311 21:34:45.054499   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Getting domain xml...
	I0311 21:34:45.055347   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Creating domain...
	I0311 21:34:46.378216   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting to get IP...
	I0311 21:34:46.379054   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.379376   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.379485   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.379392   71893 retry.go:31] will retry after 242.915621ms: waiting for machine to come up
	I0311 21:34:46.623729   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.624348   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.624375   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.624304   71893 retry.go:31] will retry after 274.237436ms: waiting for machine to come up
	I0311 21:34:46.899864   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.900347   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:46.900381   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:46.900296   71893 retry.go:31] will retry after 333.693752ms: waiting for machine to come up
	I0311 21:34:47.235751   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.236278   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.236309   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:47.236220   71893 retry.go:31] will retry after 513.728994ms: waiting for machine to come up
	I0311 21:34:47.752081   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.752585   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:47.752622   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:47.752553   71893 retry.go:31] will retry after 575.202217ms: waiting for machine to come up
	I0311 21:34:48.329095   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:48.329524   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:48.329557   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:48.329477   71893 retry.go:31] will retry after 741.05703ms: waiting for machine to come up
	I0311 21:34:49.072641   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.073163   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.073195   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:49.073101   71893 retry.go:31] will retry after 802.911807ms: waiting for machine to come up
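Both VM restarts in this stretch sit in the same wait-for-IP loop: each failed DHCP lookup is followed by a retry.go line announcing a growing, jittered delay. A small sketch of that retry shape (not minikube's retry.go, just the pattern the log prints):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts are exhausted, sleeping a
// jittered, roughly doubling delay between tries, like the log's retry lines.
func retry(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("machine never reported an IP")
}

func main() {
	tries := 0
	err := retry(8, 250*time.Millisecond, func() error {
		tries++
		if tries < 4 { // pretend the DHCP lease shows up on the 4th lookup
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("result:", err)
}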
	I0311 21:34:45.528876   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:47.530391   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:49.530451   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:46.625154   70908 main.go:141] libmachine: (old-k8s-version-239315) Calling .GetIP
	I0311 21:34:46.627732   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628080   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:9d:32", ip: ""} in network mk-old-k8s-version-239315: {Iface:virbr3 ExpiryTime:2024-03-11 22:34:37 +0000 UTC Type:0 Mac:52:54:00:5b:9d:32 Iaid: IPaddr:192.168.72.52 Prefix:24 Hostname:old-k8s-version-239315 Clientid:01:52:54:00:5b:9d:32}
	I0311 21:34:46.628102   70908 main.go:141] libmachine: (old-k8s-version-239315) DBG | domain old-k8s-version-239315 has defined IP address 192.168.72.52 and MAC address 52:54:00:5b:9d:32 in network mk-old-k8s-version-239315
	I0311 21:34:46.628304   70908 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0311 21:34:46.633367   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:46.649537   70908 kubeadm.go:877] updating cluster {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:34:46.649677   70908 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 21:34:46.649733   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:46.699194   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:46.699264   70908 ssh_runner.go:195] Run: which lz4
	I0311 21:34:46.703944   70908 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0311 21:34:46.709224   70908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:34:46.709258   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0311 21:34:48.747926   70908 crio.go:444] duration metric: took 2.044006932s to copy over tarball
	I0311 21:34:48.747994   70908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:34:49.629334   70604 pod_ready.go:102] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:51.122454   70604 pod_ready.go:92] pod "coredns-5dd5756b68-czng9" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:51.122481   70604 pod_ready.go:81] duration metric: took 11.006878828s for pod "coredns-5dd5756b68-czng9" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:51.122494   70604 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.227971   70604 pod_ready.go:92] pod "etcd-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.228001   70604 pod_ready.go:81] duration metric: took 1.105498501s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.228014   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.234804   70604 pod_ready.go:92] pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.234834   70604 pod_ready.go:81] duration metric: took 6.811865ms for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.234854   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.241448   70604 pod_ready.go:92] pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.241473   70604 pod_ready.go:81] duration metric: took 6.611927ms for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.241486   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n2qzt" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.249614   70604 pod_ready.go:92] pod "kube-proxy-n2qzt" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:52.249648   70604 pod_ready.go:81] duration metric: took 8.154372ms for pod "kube-proxy-n2qzt" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:52.249661   70604 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:53.139924   70604 pod_ready.go:92] pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:34:53.139951   70604 pod_ready.go:81] duration metric: took 890.27792ms for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:53.139961   70604 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" ...
	I0311 21:34:49.877965   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.878438   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:49.878460   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:49.878397   71893 retry.go:31] will retry after 1.163030899s: waiting for machine to come up
	I0311 21:34:51.042660   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:51.043181   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:51.043210   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:51.043131   71893 retry.go:31] will retry after 1.225509553s: waiting for machine to come up
	I0311 21:34:52.269779   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:52.270321   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:52.270358   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:52.270250   71893 retry.go:31] will retry after 2.091046831s: waiting for machine to come up
	I0311 21:34:54.363231   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:54.363664   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:54.363693   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:54.363618   71893 retry.go:31] will retry after 1.759309864s: waiting for machine to come up
	I0311 21:34:52.031032   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:54.529537   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:52.300295   70908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.55227284s)
	I0311 21:34:52.300322   70908 crio.go:451] duration metric: took 3.552370125s to extract the tarball
	I0311 21:34:52.300331   70908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:34:52.349405   70908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:34:52.395791   70908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0311 21:34:52.395821   70908 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 21:34:52.395892   70908 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.395955   70908 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.396002   70908 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.396010   70908 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.395959   70908 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.395932   70908 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.395921   70908 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0311 21:34:52.395974   70908 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397721   70908 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.397760   70908 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.397767   70908 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.397768   70908 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.397762   70908 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.397804   70908 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.398008   70908 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0311 21:34:52.398129   70908 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.548255   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.549300   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.560293   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.564094   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.564433   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.569516   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.578251   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0311 21:34:52.674385   70908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0311 21:34:52.674427   70908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.674475   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.725602   70908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:34:52.741797   70908 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0311 21:34:52.741840   70908 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.741882   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.793195   70908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0311 21:34:52.793239   70908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.793278   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798118   70908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0311 21:34:52.798174   70908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.798220   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798241   70908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0311 21:34:52.798277   70908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.798312   70908 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0311 21:34:52.798333   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798285   70908 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0311 21:34:52.798378   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0311 21:34:52.798399   70908 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:52.798434   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.798336   70908 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0311 21:34:52.798510   70908 ssh_runner.go:195] Run: which crictl
	I0311 21:34:52.957658   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0311 21:34:52.957712   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0311 21:34:52.957765   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0311 21:34:52.957816   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0311 21:34:52.957846   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0311 21:34:52.957904   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0311 21:34:52.957925   70908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0311 21:34:53.106649   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0311 21:34:53.106699   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0311 21:34:53.106913   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0311 21:34:53.107837   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0311 21:34:53.116024   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0311 21:34:53.122060   70908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0311 21:34:53.122118   70908 cache_images.go:92] duration metric: took 726.282306ms to LoadCachedImages
	W0311 21:34:53.122205   70908 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18358-11004/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0311 21:34:53.122224   70908 kubeadm.go:928] updating node { 192.168.72.52 8443 v1.20.0 crio true true} ...
	I0311 21:34:53.122341   70908 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239315 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:34:53.122443   70908 ssh_runner.go:195] Run: crio config
	I0311 21:34:53.192161   70908 cni.go:84] Creating CNI manager for ""
	I0311 21:34:53.192191   70908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:34:53.192211   70908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:34:53.192233   70908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.52 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239315 NodeName:old-k8s-version-239315 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0311 21:34:53.192405   70908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239315"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:34:53.192476   70908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0311 21:34:53.203965   70908 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:34:53.204019   70908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:34:53.215221   70908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0311 21:34:53.235943   70908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:34:53.255383   70908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0311 21:34:53.276634   70908 ssh_runner.go:195] Run: grep 192.168.72.52	control-plane.minikube.internal$ /etc/hosts
	I0311 21:34:53.281778   70908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:34:53.298479   70908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:34:53.450052   70908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:34:53.472459   70908 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315 for IP: 192.168.72.52
	I0311 21:34:53.472480   70908 certs.go:194] generating shared ca certs ...
	I0311 21:34:53.472524   70908 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:53.472676   70908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:34:53.472728   70908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:34:53.472771   70908 certs.go:256] generating profile certs ...
	I0311 21:34:53.472883   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/client.key
	I0311 21:34:53.472954   70908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key.1e888bb1
	I0311 21:34:53.473013   70908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key
	I0311 21:34:53.473143   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:34:53.473185   70908 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:34:53.473198   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:34:53.473237   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:34:53.473272   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:34:53.473307   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:34:53.473363   70908 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:34:53.473988   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:34:53.527429   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:34:53.575908   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:34:53.622438   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:34:53.665366   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 21:34:53.702121   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0311 21:34:53.746066   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:34:53.779151   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/old-k8s-version-239315/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 21:34:53.813286   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:34:53.847058   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:34:53.882261   70908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:34:53.912444   70908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:34:53.932592   70908 ssh_runner.go:195] Run: openssl version
	I0311 21:34:53.939200   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:34:53.955630   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960866   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.960920   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:34:53.967258   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:34:53.981075   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:34:53.995065   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000196   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.000272   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:34:54.008574   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:34:54.022782   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:34:54.037409   70908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042893   70908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.042965   70908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:34:54.049497   70908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:34:54.062597   70908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:34:54.067971   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:34:54.074746   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:34:54.081323   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:34:54.088762   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:34:54.095529   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:34:54.102396   70908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 21:34:54.109553   70908 kubeadm.go:391] StartCluster: {Name:old-k8s-version-239315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:34:54.109639   70908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:34:54.109689   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.152063   70908 cri.go:89] found id: ""
	I0311 21:34:54.152143   70908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:34:54.163988   70908 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:34:54.164005   70908 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:34:54.164011   70908 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:34:54.164050   70908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:34:54.175616   70908 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:34:54.176779   70908 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239315" does not appear in /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:34:54.177542   70908 kubeconfig.go:62] /home/jenkins/minikube-integration/18358-11004/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239315" cluster setting kubeconfig missing "old-k8s-version-239315" context setting]
	I0311 21:34:54.178649   70908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:34:54.180405   70908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:34:54.191864   70908 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.52
	I0311 21:34:54.191891   70908 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:34:54.191903   70908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:34:54.191948   70908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:34:54.233779   70908 cri.go:89] found id: ""
	I0311 21:34:54.233852   70908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:34:54.253672   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:34:54.266010   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:34:54.266038   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:34:54.266085   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:34:54.277867   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:34:54.277918   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:34:54.288984   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:34:54.300133   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:34:54.300197   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:34:54.312090   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.323997   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:34:54.324059   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:34:54.337225   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:34:54.348223   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:34:54.348266   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:34:54.359245   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:34:54.370003   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:54.525972   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.408437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.676995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.819933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:34:55.913736   70908 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:34:55.913811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:55.147500   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:57.148276   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:56.124678   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:56.125150   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:56.125183   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:56.125101   71893 retry.go:31] will retry after 2.284226205s: waiting for machine to come up
	I0311 21:34:58.412391   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:34:58.412973   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:34:58.413002   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:34:58.412923   71893 retry.go:31] will retry after 4.532871869s: waiting for machine to come up
	I0311 21:34:57.031683   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:59.032261   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:34:56.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:56.914753   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.413928   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:57.914123   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.413931   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:58.914199   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.414205   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.913880   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:00.914121   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:34:59.148774   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:01.646997   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:03.647990   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:02.948316   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:02.948762   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | unable to find current IP address of domain default-k8s-diff-port-766430 in network mk-default-k8s-diff-port-766430
	I0311 21:35:02.948790   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | I0311 21:35:02.948704   71893 retry.go:31] will retry after 4.885152649s: waiting for machine to come up
	I0311 21:35:01.529589   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:04.028860   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:01.414003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:01.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.414483   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:02.913977   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.414740   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:03.914735   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.414726   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:04.914846   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.914715   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:05.648516   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:08.147744   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:07.835002   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.835551   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Found IP for machine: 192.168.61.11
	I0311 21:35:07.835585   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Reserving static IP address...
	I0311 21:35:07.835601   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has current primary IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.836026   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-766430", mac: "52:54:00:41:07:8d", ip: "192.168.61.11"} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.836055   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | skip adding static IP to network mk-default-k8s-diff-port-766430 - found existing host DHCP lease matching {name: "default-k8s-diff-port-766430", mac: "52:54:00:41:07:8d", ip: "192.168.61.11"}
	I0311 21:35:07.836075   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Reserved static IP address: 192.168.61.11
	I0311 21:35:07.836110   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Getting to WaitForSSH function...
	I0311 21:35:07.836125   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Waiting for SSH to be available...
	I0311 21:35:07.838230   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.838601   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.838631   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.838757   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Using SSH client type: external
	I0311 21:35:07.838784   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Using SSH private key: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa (-rw-------)
	I0311 21:35:07.838830   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0311 21:35:07.838871   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | About to run SSH command:
	I0311 21:35:07.838897   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | exit 0
	I0311 21:35:07.968765   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | SSH cmd err, output: <nil>: 
	I0311 21:35:07.969119   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetConfigRaw
	I0311 21:35:07.969756   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:07.972490   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.972921   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.972949   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.973180   70417 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/config.json ...
	I0311 21:35:07.973362   70417 machine.go:94] provisionDockerMachine start ...
	I0311 21:35:07.973381   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:07.973582   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:07.975926   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.976254   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:07.976277   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:07.976419   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:07.976566   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:07.976704   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:07.976847   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:07.976991   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:07.977161   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:07.977171   70417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 21:35:08.093841   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 21:35:08.093864   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.094076   70417 buildroot.go:166] provisioning hostname "default-k8s-diff-port-766430"
	I0311 21:35:08.094100   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.094329   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.097134   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.097498   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.097528   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.097670   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.097854   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.098021   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.098178   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.098409   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.098642   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.098657   70417 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-766430 && echo "default-k8s-diff-port-766430" | sudo tee /etc/hostname
	I0311 21:35:08.233860   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-766430
	
	I0311 21:35:08.233890   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.236977   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.237387   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.237408   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.237596   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.237791   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.237962   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.238194   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.238359   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.238515   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.238532   70417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-766430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-766430/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-766430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 21:35:08.363393   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 21:35:08.363419   70417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18358-11004/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-11004/.minikube}
	I0311 21:35:08.363471   70417 buildroot.go:174] setting up certificates
	I0311 21:35:08.363484   70417 provision.go:84] configureAuth start
	I0311 21:35:08.363497   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetMachineName
	I0311 21:35:08.363780   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:08.366605   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.366990   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.367012   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.367139   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.369314   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.369650   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.369676   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.369798   70417 provision.go:143] copyHostCerts
	I0311 21:35:08.369853   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem, removing ...
	I0311 21:35:08.369863   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem
	I0311 21:35:08.369915   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/ca.pem (1082 bytes)
	I0311 21:35:08.370005   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem, removing ...
	I0311 21:35:08.370013   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem
	I0311 21:35:08.370032   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/cert.pem (1123 bytes)
	I0311 21:35:08.370091   70417 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem, removing ...
	I0311 21:35:08.370098   70417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem
	I0311 21:35:08.370114   70417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-11004/.minikube/key.pem (1675 bytes)
	I0311 21:35:08.370169   70417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-766430 san=[127.0.0.1 192.168.61.11 default-k8s-diff-port-766430 localhost minikube]
	I0311 21:35:08.542469   70417 provision.go:177] copyRemoteCerts
	I0311 21:35:08.542529   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 21:35:08.542550   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.545388   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.545750   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.545782   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.545958   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.546115   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.546264   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.546360   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:08.635866   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 21:35:08.667490   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0311 21:35:08.697944   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 21:35:08.726836   70417 provision.go:87] duration metric: took 363.34159ms to configureAuth
	I0311 21:35:08.726860   70417 buildroot.go:189] setting minikube options for container-runtime
	I0311 21:35:08.727033   70417 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:35:08.727115   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:08.730050   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.730458   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:08.730489   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:08.730788   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:08.730987   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.731170   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:08.731317   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:08.731466   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:08.731607   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:08.731629   70417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0311 21:35:09.035100   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0311 21:35:09.035129   70417 machine.go:97] duration metric: took 1.061753229s to provisionDockerMachine
	I0311 21:35:09.035142   70417 start.go:293] postStartSetup for "default-k8s-diff-port-766430" (driver="kvm2")
	I0311 21:35:09.035151   70417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 21:35:09.035165   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.035458   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 21:35:09.035484   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.038340   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.038638   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.038668   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.038829   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.039027   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.039178   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.039343   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:09.133013   70417 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 21:35:09.138043   70417 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 21:35:09.138065   70417 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/addons for local assets ...
	I0311 21:35:09.138166   70417 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-11004/.minikube/files for local assets ...
	I0311 21:35:09.138259   70417 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem -> 182352.pem in /etc/ssl/certs
	I0311 21:35:09.138364   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 21:35:09.149527   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:35:09.176424   70417 start.go:296] duration metric: took 141.271199ms for postStartSetup
	I0311 21:35:09.176460   70417 fix.go:56] duration metric: took 24.15021813s for fixHost
	I0311 21:35:09.176479   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.179447   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.179830   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.179859   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.180147   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.180402   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.180566   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.180758   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.180974   70417 main.go:141] libmachine: Using SSH client type: native
	I0311 21:35:09.181186   70417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d2e0] 0x830040 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I0311 21:35:09.181200   70417 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 21:35:09.297740   70417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710192909.282566583
	
	I0311 21:35:09.297764   70417 fix.go:216] guest clock: 1710192909.282566583
	I0311 21:35:09.297773   70417 fix.go:229] Guest: 2024-03-11 21:35:09.282566583 +0000 UTC Remote: 2024-03-11 21:35:09.176465496 +0000 UTC m=+364.839103648 (delta=106.101087ms)
	I0311 21:35:09.297795   70417 fix.go:200] guest clock delta is within tolerance: 106.101087ms
	I0311 21:35:09.297802   70417 start.go:83] releasing machines lock for "default-k8s-diff-port-766430", held for 24.271590337s
	I0311 21:35:09.297825   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.298067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:09.300989   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.301399   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.301422   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.301604   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302091   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302291   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:35:09.302385   70417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 21:35:09.302433   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.302490   70417 ssh_runner.go:195] Run: cat /version.json
	I0311 21:35:09.302515   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:35:09.305403   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305572   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305802   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.305831   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.305912   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.306042   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:09.306067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.306067   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:09.306223   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.306351   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:35:09.306430   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:09.306511   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:35:09.306645   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:35:09.306772   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:35:06.528726   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:09.029055   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:09.419852   70417 ssh_runner.go:195] Run: systemctl --version
	I0311 21:35:09.427141   70417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0311 21:35:09.579321   70417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 21:35:09.586396   70417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 21:35:09.586470   70417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 21:35:09.606617   70417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 21:35:09.606639   70417 start.go:494] detecting cgroup driver to use...
	I0311 21:35:09.606705   70417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 21:35:09.627066   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 21:35:09.646091   70417 docker.go:217] disabling cri-docker service (if available) ...
	I0311 21:35:09.646151   70417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 21:35:09.662307   70417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 21:35:09.679793   70417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 21:35:09.828827   70417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 21:35:09.984773   70417 docker.go:233] disabling docker service ...
	I0311 21:35:09.984843   70417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 21:35:10.003968   70417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 21:35:10.018609   70417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 21:35:10.174297   70417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 21:35:10.316762   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 21:35:10.338008   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 21:35:10.359320   70417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0311 21:35:10.359374   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.371953   70417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0311 21:35:10.372008   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.384823   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.397305   70417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0311 21:35:10.409521   70417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 21:35:10.424714   70417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 21:35:10.438470   70417 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0311 21:35:10.438529   70417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0311 21:35:10.454436   70417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 21:35:10.465004   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:35:10.611379   70417 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0311 21:35:10.786860   70417 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0311 21:35:10.786959   70417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0311 21:35:10.792496   70417 start.go:562] Will wait 60s for crictl version
	I0311 21:35:10.792551   70417 ssh_runner.go:195] Run: which crictl
	I0311 21:35:10.797079   70417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 21:35:10.837010   70417 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0311 21:35:10.837086   70417 ssh_runner.go:195] Run: crio --version
	I0311 21:35:10.868308   70417 ssh_runner.go:195] Run: crio --version
	I0311 21:35:10.900087   70417 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0311 21:35:06.414389   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:06.914233   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.414565   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:07.914773   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.414348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:08.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.414822   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:09.914743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.413987   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.914698   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:10.150688   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:12.648444   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:10.901304   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetIP
	I0311 21:35:10.904103   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:10.904380   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:35:10.904407   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:35:10.904557   70417 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0311 21:35:10.909585   70417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:35:10.924163   70417 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-766430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 21:35:10.924311   70417 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0311 21:35:10.924408   70417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:35:10.969555   70417 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0311 21:35:10.969623   70417 ssh_runner.go:195] Run: which lz4
	I0311 21:35:10.974054   70417 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 21:35:10.978776   70417 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 21:35:10.978811   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0311 21:35:12.893346   70417 crio.go:444] duration metric: took 1.91931676s to copy over tarball
	I0311 21:35:12.893421   70417 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 21:35:11.031301   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:13.527896   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:11.414320   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:11.914003   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.414529   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:12.914476   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.414282   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:13.914426   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.414521   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.914001   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.414839   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.913921   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:14.648625   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:17.148688   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:15.772070   70417 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.878627154s)
	I0311 21:35:15.772094   70417 crio.go:451] duration metric: took 2.878719213s to extract the tarball
	I0311 21:35:15.772101   70417 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 21:35:15.818581   70417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 21:35:15.872635   70417 crio.go:496] all images are preloaded for cri-o runtime.
	I0311 21:35:15.872658   70417 cache_images.go:84] Images are preloaded, skipping loading
	I0311 21:35:15.872667   70417 kubeadm.go:928] updating node { 192.168.61.11 8444 v1.28.4 crio true true} ...
	I0311 21:35:15.872823   70417 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-766430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 21:35:15.872933   70417 ssh_runner.go:195] Run: crio config
	I0311 21:35:15.928776   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:35:15.928803   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:35:15.928818   70417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 21:35:15.928843   70417 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-766430 NodeName:default-k8s-diff-port-766430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 21:35:15.929018   70417 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-766430"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 21:35:15.929090   70417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 21:35:15.941853   70417 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 21:35:15.941908   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 21:35:15.954936   70417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0311 21:35:15.975236   70417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 21:35:15.994509   70417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0311 21:35:16.014058   70417 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I0311 21:35:16.018972   70417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 21:35:16.035169   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:35:16.160453   70417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:35:16.182252   70417 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430 for IP: 192.168.61.11
	I0311 21:35:16.182272   70417 certs.go:194] generating shared ca certs ...
	I0311 21:35:16.182286   70417 certs.go:226] acquiring lock for ca certs: {Name:mkc1162dd2fd565881b28a047e5f480cda50fd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:35:16.182419   70417 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key
	I0311 21:35:16.182465   70417 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key
	I0311 21:35:16.182475   70417 certs.go:256] generating profile certs ...
	I0311 21:35:16.182545   70417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/client.key
	I0311 21:35:16.182601   70417 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.key.2c00376c
	I0311 21:35:16.182635   70417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.key
	I0311 21:35:16.182754   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem (1338 bytes)
	W0311 21:35:16.182783   70417 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235_empty.pem, impossibly tiny 0 bytes
	I0311 21:35:16.182789   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 21:35:16.182823   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/ca.pem (1082 bytes)
	I0311 21:35:16.182844   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/cert.pem (1123 bytes)
	I0311 21:35:16.182867   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/certs/key.pem (1675 bytes)
	I0311 21:35:16.182901   70417 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem (1708 bytes)
	I0311 21:35:16.183517   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 21:35:16.231409   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 21:35:16.277004   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 21:35:16.315346   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 21:35:16.352697   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0311 21:35:16.388570   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 21:35:16.422830   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 21:35:16.452562   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/default-k8s-diff-port-766430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 21:35:16.480976   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 21:35:16.507149   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/certs/18235.pem --> /usr/share/ca-certificates/18235.pem (1338 bytes)
	I0311 21:35:16.535832   70417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/ssl/certs/182352.pem --> /usr/share/ca-certificates/182352.pem (1708 bytes)
	I0311 21:35:16.566697   70417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 21:35:16.587454   70417 ssh_runner.go:195] Run: openssl version
	I0311 21:35:16.593880   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 21:35:16.608197   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.613604   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.613673   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 21:35:16.620156   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 21:35:16.632634   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18235.pem && ln -fs /usr/share/ca-certificates/18235.pem /etc/ssl/certs/18235.pem"
	I0311 21:35:16.646047   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.652530   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:19 /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.652591   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18235.pem
	I0311 21:35:16.660480   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18235.pem /etc/ssl/certs/51391683.0"
	I0311 21:35:16.673572   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/182352.pem && ln -fs /usr/share/ca-certificates/182352.pem /etc/ssl/certs/182352.pem"
	I0311 21:35:16.687161   70417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.692589   70417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:19 /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.692632   70417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/182352.pem
	I0311 21:35:16.705471   70417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/182352.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 21:35:16.718251   70417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 21:35:16.723979   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 21:35:16.731335   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 21:35:16.738485   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 21:35:16.745489   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 21:35:16.752295   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 21:35:16.759251   70417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 21:35:16.766128   70417 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-766430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-766430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 21:35:16.766237   70417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0311 21:35:16.766292   70417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:35:16.806418   70417 cri.go:89] found id: ""
	I0311 21:35:16.806478   70417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 21:35:16.821434   70417 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 21:35:16.821455   70417 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 21:35:16.821462   70417 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 21:35:16.821514   70417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 21:35:16.835457   70417 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 21:35:16.836764   70417 kubeconfig.go:125] found "default-k8s-diff-port-766430" server: "https://192.168.61.11:8444"
	I0311 21:35:16.839163   70417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 21:35:16.850037   70417 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.11
	I0311 21:35:16.850065   70417 kubeadm.go:1153] stopping kube-system containers ...
	I0311 21:35:16.850074   70417 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0311 21:35:16.850117   70417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 21:35:16.895532   70417 cri.go:89] found id: ""
	I0311 21:35:16.895612   70417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 21:35:16.913151   70417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:35:16.927989   70417 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:35:16.928014   70417 kubeadm.go:156] found existing configuration files:
	
	I0311 21:35:16.928073   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0311 21:35:16.939803   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:35:16.939849   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:35:16.950103   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0311 21:35:16.960164   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:35:16.960213   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:35:16.970349   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0311 21:35:16.980056   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:35:16.980098   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:35:16.990189   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0311 21:35:16.999799   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:35:16.999874   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:35:17.010502   70417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:35:17.021106   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:17.136170   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.044684   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.296278   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.376702   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:18.473740   70417 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:35:18.473840   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.974894   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:15.529099   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:17.755777   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:20.028341   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:16.414018   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:16.914685   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.414894   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:17.914319   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.414875   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:18.914338   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.414496   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.914396   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.414731   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:20.914149   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.648967   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:22.148024   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:19.474609   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:19.499907   70417 api_server.go:72] duration metric: took 1.026169594s to wait for apiserver process to appear ...
	I0311 21:35:19.499931   70417 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:35:19.499951   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:19.500566   70417 api_server.go:269] stopped: https://192.168.61.11:8444/healthz: Get "https://192.168.61.11:8444/healthz": dial tcp 192.168.61.11:8444: connect: connection refused
	I0311 21:35:20.000807   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:22.693958   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0311 21:35:22.693991   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0311 21:35:22.694006   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:22.772747   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:22.772792   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:23.000004   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:23.004763   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:23.004805   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:23.500112   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:23.507209   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:23.507236   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:24.000861   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:24.006793   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0311 21:35:24.006830   70417 api_server.go:103] status: https://192.168.61.11:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0311 21:35:24.500264   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:35:24.508242   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0311 21:35:24.520230   70417 api_server.go:141] control plane version: v1.28.4
	I0311 21:35:24.520255   70417 api_server.go:131] duration metric: took 5.020318338s to wait for apiserver health ...
	I0311 21:35:24.520285   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:35:24.520291   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:35:24.522151   70417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:35:22.029963   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:24.530052   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:21.414126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:21.914012   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.414680   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:22.914766   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.414478   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:23.914770   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.414370   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.914772   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.413991   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:25.914516   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:24.149179   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:26.647134   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:28.647725   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:24.523964   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:35:24.538536   70417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:35:24.583279   70417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:35:24.594703   70417 system_pods.go:59] 8 kube-system pods found
	I0311 21:35:24.594730   70417 system_pods.go:61] "coredns-5dd5756b68-pkn9d" [ee4de3f7-1044-4dc9-91dc-d9b23493b0bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:35:24.594737   70417 system_pods.go:61] "etcd-default-k8s-diff-port-766430" [96b9327c-f97d-463f-9d1e-3210b4032aab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0311 21:35:24.594751   70417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766430" [fc650f48-2e28-4219-8571-8b6c43891eb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0311 21:35:24.594763   70417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766430" [c7cc5d40-ad56-4132-ab81-3422ffe1d5b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0311 21:35:24.594772   70417 system_pods.go:61] "kube-proxy-cggzr" [f6b7fe4e-7d57-4604-b63d-f9890826b659] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:35:24.594784   70417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766430" [8a156fec-b2f3-46e8-bf0d-0bf291ef8783] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0311 21:35:24.594795   70417 system_pods.go:61] "metrics-server-57f55c9bc5-kxl6n" [ac62700b-a39a-480e-841e-852bf3c66e7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:35:24.594805   70417 system_pods.go:61] "storage-provisioner" [a0b03582-0d90-4a7f-919c-0552046edcb5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:35:24.594821   70417 system_pods.go:74] duration metric: took 11.523907ms to wait for pod list to return data ...
	I0311 21:35:24.594830   70417 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:35:24.606500   70417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:35:24.606529   70417 node_conditions.go:123] node cpu capacity is 2
	I0311 21:35:24.606546   70417 node_conditions.go:105] duration metric: took 11.711241ms to run NodePressure ...
	I0311 21:35:24.606565   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 21:35:24.893361   70417 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0311 21:35:24.899200   70417 kubeadm.go:733] kubelet initialised
	I0311 21:35:24.899225   70417 kubeadm.go:734] duration metric: took 5.837351ms waiting for restarted kubelet to initialise ...
	I0311 21:35:24.899235   70417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:35:24.905858   70417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:26.912640   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:28.916566   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:27.029381   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:29.529565   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:26.414267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:26.914876   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.414469   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:27.914513   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.414924   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:28.914126   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.414526   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:29.914039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.414305   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:30.914438   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.147527   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:33.147694   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.413246   70417 pod_ready.go:102] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.912878   70417 pod_ready.go:92] pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:31.912899   70417 pod_ready.go:81] duration metric: took 7.007017714s for pod "coredns-5dd5756b68-pkn9d" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:31.912908   70417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:33.977091   70417 pod_ready.go:102] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:32.029295   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:34.529021   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:31.414610   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:31.914472   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.414158   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:32.914169   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.414745   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:33.914820   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.414071   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:34.914228   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.414135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.914695   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:35.148058   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:37.648200   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.422565   70417 pod_ready.go:102] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.921304   70417 pod_ready.go:92] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.921328   70417 pod_ready.go:81] duration metric: took 5.008411943s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.921340   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.927268   70417 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.927284   70417 pod_ready.go:81] duration metric: took 5.936969ms for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.927292   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.932540   70417 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.932563   70417 pod_ready.go:81] duration metric: took 5.264737ms for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.932575   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cggzr" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.937456   70417 pod_ready.go:92] pod "kube-proxy-cggzr" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.937473   70417 pod_ready.go:81] duration metric: took 4.892276ms for pod "kube-proxy-cggzr" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.937480   70417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.942372   70417 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:35:36.942390   70417 pod_ready.go:81] duration metric: took 4.902792ms for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:36.942401   70417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" ...
	I0311 21:35:38.949452   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.531316   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:39.030491   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:36.414435   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:36.914157   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.414539   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:37.914811   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.414070   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:38.914303   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.413935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:39.914135   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.414569   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.914106   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:40.147355   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:42.148353   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:40.950204   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:42.950335   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:41.528874   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:43.530140   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:41.414404   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:41.914323   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.414215   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:42.914566   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.414671   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:43.914658   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.414703   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.913966   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.414045   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:45.914260   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:44.648282   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:47.148247   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:45.449963   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:47.451576   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:46.029164   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:48.529137   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:46.414016   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:46.914821   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.414210   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:47.914008   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.413884   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:48.914160   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.414877   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.914379   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.414293   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:50.913867   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:49.148585   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.648372   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:49.949667   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.950874   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:53.953067   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:50.529616   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:53.030586   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:51.414582   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:51.914453   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.414668   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:52.914816   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.414768   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:53.914592   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.414743   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:54.914307   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:55.414000   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:55.914553   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:55.914636   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:55.957434   70908 cri.go:89] found id: ""
	I0311 21:35:55.957459   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.957470   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:55.957477   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:55.957545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:55.995255   70908 cri.go:89] found id: ""
	I0311 21:35:55.995279   70908 logs.go:276] 0 containers: []
	W0311 21:35:55.995290   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:55.995305   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:55.995364   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:56.038893   70908 cri.go:89] found id: ""
	I0311 21:35:56.038916   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.038926   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:56.038933   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:56.038990   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:54.147165   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.148641   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.647841   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.451057   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.950421   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:55.528922   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:58.029209   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:00.029912   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:35:56.081497   70908 cri.go:89] found id: ""
	I0311 21:35:56.081517   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.081528   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:56.081534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:56.081591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:56.120047   70908 cri.go:89] found id: ""
	I0311 21:35:56.120071   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.120079   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:56.120084   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:56.120156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:56.157350   70908 cri.go:89] found id: ""
	I0311 21:35:56.157370   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.157377   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:56.157382   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:56.157433   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:56.198324   70908 cri.go:89] found id: ""
	I0311 21:35:56.198354   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.198374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:56.198381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:56.198437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:56.236579   70908 cri.go:89] found id: ""
	I0311 21:35:56.236608   70908 logs.go:276] 0 containers: []
	W0311 21:35:56.236619   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:35:56.236691   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:56.236712   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:56.377789   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:35:56.377809   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:56.377825   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:56.449765   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:56.449807   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:56.502417   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:56.502448   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:56.557205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:56.557241   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:35:59.073411   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:35:59.088205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:35:59.088287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:35:59.126458   70908 cri.go:89] found id: ""
	I0311 21:35:59.126486   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.126494   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:35:59.126499   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:35:59.126555   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:35:59.197887   70908 cri.go:89] found id: ""
	I0311 21:35:59.197911   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.197919   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:35:59.197924   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:35:59.197967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:35:59.239523   70908 cri.go:89] found id: ""
	I0311 21:35:59.239552   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.239562   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:35:59.239570   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:35:59.239642   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:35:59.280903   70908 cri.go:89] found id: ""
	I0311 21:35:59.280930   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.280940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:35:59.280947   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:35:59.281024   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:35:59.320218   70908 cri.go:89] found id: ""
	I0311 21:35:59.320242   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.320254   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:35:59.320260   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:35:59.320314   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:35:59.361235   70908 cri.go:89] found id: ""
	I0311 21:35:59.361265   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.361276   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:35:59.361283   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:35:59.361352   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:35:59.409477   70908 cri.go:89] found id: ""
	I0311 21:35:59.409503   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.409514   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:35:59.409522   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:35:59.409568   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:35:59.454704   70908 cri.go:89] found id: ""
	I0311 21:35:59.454728   70908 logs.go:276] 0 containers: []
	W0311 21:35:59.454739   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:35:59.454748   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:35:59.454767   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:35:59.525839   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:35:59.525864   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:35:59.569577   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:35:59.569606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:35:59.628402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:35:59.628437   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:35:59.647181   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:35:59.647208   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:35:59.731300   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:00.650515   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:03.146560   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:01.449702   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:03.950341   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:02.030569   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:04.529453   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:02.232458   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:02.246948   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:02.247025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:02.290561   70908 cri.go:89] found id: ""
	I0311 21:36:02.290588   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.290599   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:02.290605   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:02.290659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:02.333788   70908 cri.go:89] found id: ""
	I0311 21:36:02.333814   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.333821   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:02.333826   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:02.333877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:02.375774   70908 cri.go:89] found id: ""
	I0311 21:36:02.375798   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.375806   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:02.375812   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:02.375862   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:02.414741   70908 cri.go:89] found id: ""
	I0311 21:36:02.414781   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.414803   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:02.414810   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:02.414875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:02.456637   70908 cri.go:89] found id: ""
	I0311 21:36:02.456660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.456670   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:02.456677   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:02.456759   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:02.494633   70908 cri.go:89] found id: ""
	I0311 21:36:02.494660   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.494670   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:02.494678   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:02.494738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:02.536187   70908 cri.go:89] found id: ""
	I0311 21:36:02.536212   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.536223   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:02.536230   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:02.536291   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:02.574933   70908 cri.go:89] found id: ""
	I0311 21:36:02.574962   70908 logs.go:276] 0 containers: []
	W0311 21:36:02.574973   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:02.574985   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:02.575001   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:02.656610   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:02.656637   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:02.656653   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:02.730514   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:02.730548   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:02.776009   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:02.776041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:02.829792   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:02.829826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.345568   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:05.360082   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:05.360164   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:05.406106   70908 cri.go:89] found id: ""
	I0311 21:36:05.406131   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.406141   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:05.406147   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:05.406203   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:05.449584   70908 cri.go:89] found id: ""
	I0311 21:36:05.449608   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.449617   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:05.449624   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:05.449680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:05.493869   70908 cri.go:89] found id: ""
	I0311 21:36:05.493898   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.493912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:05.493928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:05.493994   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:05.563506   70908 cri.go:89] found id: ""
	I0311 21:36:05.563532   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.563542   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:05.563549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:05.563600   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:05.630140   70908 cri.go:89] found id: ""
	I0311 21:36:05.630165   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.630172   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:05.630177   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:05.630230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:05.675584   70908 cri.go:89] found id: ""
	I0311 21:36:05.675612   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.675623   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:05.675631   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:05.675689   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:05.720521   70908 cri.go:89] found id: ""
	I0311 21:36:05.720548   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.720557   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:05.720563   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:05.720615   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:05.759323   70908 cri.go:89] found id: ""
	I0311 21:36:05.759351   70908 logs.go:276] 0 containers: []
	W0311 21:36:05.759359   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:05.759367   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:05.759379   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:05.801024   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:05.801050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:05.856330   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:05.856356   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:05.871299   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:05.871324   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:05.950218   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:05.950245   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:05.950259   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:05.148227   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:07.647389   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:05.950833   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:08.449548   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:07.028964   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:09.029396   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:08.535502   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:08.552152   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:08.552220   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:08.596602   70908 cri.go:89] found id: ""
	I0311 21:36:08.596707   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.596731   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:08.596755   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:08.596820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:08.641091   70908 cri.go:89] found id: ""
	I0311 21:36:08.641119   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.641130   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:08.641137   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:08.641198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:08.684466   70908 cri.go:89] found id: ""
	I0311 21:36:08.684494   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.684503   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:08.684510   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:08.684570   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:08.730899   70908 cri.go:89] found id: ""
	I0311 21:36:08.730924   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.730931   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:08.730937   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:08.730997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:08.775293   70908 cri.go:89] found id: ""
	I0311 21:36:08.775317   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.775324   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:08.775330   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:08.775387   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:08.816098   70908 cri.go:89] found id: ""
	I0311 21:36:08.816126   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.816137   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:08.816144   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:08.816207   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:08.857413   70908 cri.go:89] found id: ""
	I0311 21:36:08.857449   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.857460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:08.857476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:08.857541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:08.898252   70908 cri.go:89] found id: ""
	I0311 21:36:08.898283   70908 logs.go:276] 0 containers: []
	W0311 21:36:08.898293   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:08.898302   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:08.898313   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:08.955162   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:08.955188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:08.970234   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:08.970258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:09.055025   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:09.055043   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:09.055055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:09.140345   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:09.140376   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:10.148323   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:12.647037   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:10.450796   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:12.450839   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:11.529842   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:14.029706   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:11.681542   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:11.697407   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:11.697481   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:11.740239   70908 cri.go:89] found id: ""
	I0311 21:36:11.740264   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.740274   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:11.740280   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:11.740336   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:11.777625   70908 cri.go:89] found id: ""
	I0311 21:36:11.777655   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.777667   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:11.777674   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:11.777745   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:11.817202   70908 cri.go:89] found id: ""
	I0311 21:36:11.817226   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.817233   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:11.817239   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:11.817306   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:11.858912   70908 cri.go:89] found id: ""
	I0311 21:36:11.858933   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.858940   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:11.858945   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:11.858998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:11.897841   70908 cri.go:89] found id: ""
	I0311 21:36:11.897876   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.897887   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:11.897895   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:11.897955   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:11.936181   70908 cri.go:89] found id: ""
	I0311 21:36:11.936207   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.936218   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:11.936226   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:11.936293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:11.981882   70908 cri.go:89] found id: ""
	I0311 21:36:11.981905   70908 logs.go:276] 0 containers: []
	W0311 21:36:11.981915   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:11.981922   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:11.981982   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:12.022270   70908 cri.go:89] found id: ""
	I0311 21:36:12.022298   70908 logs.go:276] 0 containers: []
	W0311 21:36:12.022309   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:12.022320   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:12.022333   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:12.074640   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:12.074668   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:12.089854   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:12.089879   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:12.179578   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:12.179595   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:12.179606   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:12.263249   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:12.263285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:14.811547   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:14.827075   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:14.827175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:14.870512   70908 cri.go:89] found id: ""
	I0311 21:36:14.870544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.870555   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:14.870563   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:14.870625   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:14.908521   70908 cri.go:89] found id: ""
	I0311 21:36:14.908544   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.908553   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:14.908558   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:14.908607   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:14.951702   70908 cri.go:89] found id: ""
	I0311 21:36:14.951729   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.951739   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:14.951746   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:14.951805   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:14.992590   70908 cri.go:89] found id: ""
	I0311 21:36:14.992618   70908 logs.go:276] 0 containers: []
	W0311 21:36:14.992630   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:14.992638   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:14.992698   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:15.034535   70908 cri.go:89] found id: ""
	I0311 21:36:15.034556   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.034563   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:15.034569   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:15.034614   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:15.077175   70908 cri.go:89] found id: ""
	I0311 21:36:15.077200   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.077210   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:15.077218   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:15.077283   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:15.121500   70908 cri.go:89] found id: ""
	I0311 21:36:15.121530   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.121541   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:15.121549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:15.121655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:15.162712   70908 cri.go:89] found id: ""
	I0311 21:36:15.162738   70908 logs.go:276] 0 containers: []
	W0311 21:36:15.162748   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:15.162757   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:15.162776   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:15.241469   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:15.241488   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:15.241499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:15.322257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:15.322291   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:15.368258   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:15.368285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:15.427131   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:15.427163   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:14.648776   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:17.148710   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:14.452948   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:16.949085   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:18.950111   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:16.030409   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:18.529122   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:17.944348   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:17.958629   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:17.958704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:17.995869   70908 cri.go:89] found id: ""
	I0311 21:36:17.995895   70908 logs.go:276] 0 containers: []
	W0311 21:36:17.995904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:17.995914   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:17.995976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:18.032273   70908 cri.go:89] found id: ""
	I0311 21:36:18.032300   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.032308   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:18.032313   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:18.032361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:18.072497   70908 cri.go:89] found id: ""
	I0311 21:36:18.072519   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.072526   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:18.072532   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:18.072578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:18.110091   70908 cri.go:89] found id: ""
	I0311 21:36:18.110119   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.110129   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:18.110136   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:18.110199   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:18.152217   70908 cri.go:89] found id: ""
	I0311 21:36:18.152261   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.152272   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:18.152280   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:18.152347   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:18.193957   70908 cri.go:89] found id: ""
	I0311 21:36:18.193989   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.194000   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:18.194008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:18.194086   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:18.231828   70908 cri.go:89] found id: ""
	I0311 21:36:18.231861   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.231873   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:18.231880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:18.231939   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:18.271862   70908 cri.go:89] found id: ""
	I0311 21:36:18.271896   70908 logs.go:276] 0 containers: []
	W0311 21:36:18.271907   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:18.271917   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:18.271933   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:18.325405   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:18.325440   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:18.344560   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:18.344593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:18.425051   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:18.425075   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:18.425093   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:18.513247   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:18.513287   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:19.646758   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.647702   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.649318   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.450692   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.950088   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.028812   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:23.029828   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:21.060499   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:21.076648   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:21.076716   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:21.117270   70908 cri.go:89] found id: ""
	I0311 21:36:21.117298   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.117309   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:21.117317   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:21.117388   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:21.159005   70908 cri.go:89] found id: ""
	I0311 21:36:21.159045   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.159056   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:21.159063   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:21.159122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:21.196576   70908 cri.go:89] found id: ""
	I0311 21:36:21.196599   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.196609   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:21.196617   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:21.196677   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:21.237689   70908 cri.go:89] found id: ""
	I0311 21:36:21.237718   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.237729   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:21.237734   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:21.237783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:21.280662   70908 cri.go:89] found id: ""
	I0311 21:36:21.280696   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.280707   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:21.280714   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:21.280795   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:21.321475   70908 cri.go:89] found id: ""
	I0311 21:36:21.321501   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.321511   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:21.321518   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:21.321581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:21.365186   70908 cri.go:89] found id: ""
	I0311 21:36:21.365209   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.365216   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:21.365221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:21.365276   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:21.408678   70908 cri.go:89] found id: ""
	I0311 21:36:21.408713   70908 logs.go:276] 0 containers: []
	W0311 21:36:21.408725   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:21.408754   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:21.408771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:21.466635   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:21.466663   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:21.482596   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:21.482622   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:21.556750   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:21.556769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:21.556780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:21.643095   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:21.643126   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.195112   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:24.208829   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:24.208895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:24.245956   70908 cri.go:89] found id: ""
	I0311 21:36:24.245981   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.245989   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:24.245995   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:24.246053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:24.289740   70908 cri.go:89] found id: ""
	I0311 21:36:24.289766   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.289778   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:24.289784   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:24.289846   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:24.336911   70908 cri.go:89] found id: ""
	I0311 21:36:24.336963   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.336977   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:24.336986   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:24.337057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:24.381715   70908 cri.go:89] found id: ""
	I0311 21:36:24.381739   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.381753   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:24.381761   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:24.381817   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:24.423759   70908 cri.go:89] found id: ""
	I0311 21:36:24.423787   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.423797   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:24.423805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:24.423882   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:24.468903   70908 cri.go:89] found id: ""
	I0311 21:36:24.468931   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.468946   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:24.468954   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:24.469013   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:24.509602   70908 cri.go:89] found id: ""
	I0311 21:36:24.509629   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.509639   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:24.509646   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:24.509706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:24.551483   70908 cri.go:89] found id: ""
	I0311 21:36:24.551511   70908 logs.go:276] 0 containers: []
	W0311 21:36:24.551522   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:24.551532   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:24.551545   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:24.567123   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:24.567154   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:24.644215   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:24.644247   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:24.644262   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:24.726438   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:24.726469   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:24.779567   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:24.779596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:26.146823   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:28.148291   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:26.450637   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:28.949850   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:25.528542   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:27.529375   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:29.529701   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:27.337785   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:27.352504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:27.352578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:27.395787   70908 cri.go:89] found id: ""
	I0311 21:36:27.395809   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.395817   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:27.395823   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:27.395869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:27.441800   70908 cri.go:89] found id: ""
	I0311 21:36:27.441826   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.441834   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:27.441839   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:27.441893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:27.481761   70908 cri.go:89] found id: ""
	I0311 21:36:27.481791   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.481802   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:27.481809   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:27.481868   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:27.526981   70908 cri.go:89] found id: ""
	I0311 21:36:27.527011   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.527029   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:27.527037   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:27.527130   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:27.566569   70908 cri.go:89] found id: ""
	I0311 21:36:27.566602   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.566614   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:27.566622   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:27.566682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:27.607434   70908 cri.go:89] found id: ""
	I0311 21:36:27.607456   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.607464   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:27.607469   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:27.607529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:27.652648   70908 cri.go:89] found id: ""
	I0311 21:36:27.652674   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.652681   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:27.652686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:27.652756   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:27.691105   70908 cri.go:89] found id: ""
	I0311 21:36:27.691136   70908 logs.go:276] 0 containers: []
	W0311 21:36:27.691148   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:27.691158   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:27.691173   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:27.706451   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:27.706477   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:27.788935   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:27.788959   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:27.788975   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:27.875721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:27.875758   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:27.927920   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:27.927951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.487728   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:30.503425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:30.503508   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:30.550846   70908 cri.go:89] found id: ""
	I0311 21:36:30.550868   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.550875   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:30.550881   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:30.550928   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:30.586886   70908 cri.go:89] found id: ""
	I0311 21:36:30.586915   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.586925   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:30.586934   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:30.586991   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:30.627849   70908 cri.go:89] found id: ""
	I0311 21:36:30.627884   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.627895   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:30.627902   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:30.627965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:30.669188   70908 cri.go:89] found id: ""
	I0311 21:36:30.669209   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.669216   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:30.669222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:30.669266   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:30.711676   70908 cri.go:89] found id: ""
	I0311 21:36:30.711697   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.711705   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:30.711710   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:30.711758   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:30.754218   70908 cri.go:89] found id: ""
	I0311 21:36:30.754240   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.754248   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:30.754253   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:30.754299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:30.791224   70908 cri.go:89] found id: ""
	I0311 21:36:30.791255   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.791263   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:30.791269   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:30.791328   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:30.831263   70908 cri.go:89] found id: ""
	I0311 21:36:30.831291   70908 logs.go:276] 0 containers: []
	W0311 21:36:30.831301   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:30.831311   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:30.831326   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:30.876574   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:30.876600   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:30.928483   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:30.928509   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:30.944642   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:30.944665   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:31.026406   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:31.026428   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:31.026444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:30.648859   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.147907   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:30.952483   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.451714   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:32.028484   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:34.028948   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:33.611104   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:33.625644   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:33.625706   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:33.664787   70908 cri.go:89] found id: ""
	I0311 21:36:33.664816   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.664825   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:33.664830   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:33.664894   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:33.704636   70908 cri.go:89] found id: ""
	I0311 21:36:33.704659   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.704666   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:33.704672   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:33.704717   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:33.744797   70908 cri.go:89] found id: ""
	I0311 21:36:33.744837   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.744848   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:33.744855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:33.744917   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:33.787435   70908 cri.go:89] found id: ""
	I0311 21:36:33.787464   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.787474   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:33.787482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:33.787541   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:33.826578   70908 cri.go:89] found id: ""
	I0311 21:36:33.826606   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.826617   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:33.826624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:33.826684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:33.864854   70908 cri.go:89] found id: ""
	I0311 21:36:33.864875   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.864882   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:33.864887   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:33.864934   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:33.905366   70908 cri.go:89] found id: ""
	I0311 21:36:33.905397   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.905409   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:33.905416   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:33.905477   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:33.950196   70908 cri.go:89] found id: ""
	I0311 21:36:33.950222   70908 logs.go:276] 0 containers: []
	W0311 21:36:33.950232   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:33.950243   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:33.950258   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:34.001016   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:34.001049   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:34.059102   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:34.059131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:34.075879   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:34.075908   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:34.177114   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:34.177138   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:34.177161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:35.647611   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.147941   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:35.950147   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.449090   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:36.030072   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:38.527952   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:36.756459   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:36.772781   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:36.772867   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:36.820076   70908 cri.go:89] found id: ""
	I0311 21:36:36.820103   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.820111   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:36.820118   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:36.820169   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:36.859279   70908 cri.go:89] found id: ""
	I0311 21:36:36.859306   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.859317   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:36.859324   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:36.859383   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:36.899669   70908 cri.go:89] found id: ""
	I0311 21:36:36.899694   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.899705   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:36.899712   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:36.899770   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:36.938826   70908 cri.go:89] found id: ""
	I0311 21:36:36.938853   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.938864   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:36.938872   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:36.938957   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:36.976659   70908 cri.go:89] found id: ""
	I0311 21:36:36.976685   70908 logs.go:276] 0 containers: []
	W0311 21:36:36.976693   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:36.976703   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:36.976772   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:37.015439   70908 cri.go:89] found id: ""
	I0311 21:36:37.015462   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.015469   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:37.015474   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:37.015519   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:37.057469   70908 cri.go:89] found id: ""
	I0311 21:36:37.057496   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.057507   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:37.057514   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:37.057579   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:37.106287   70908 cri.go:89] found id: ""
	I0311 21:36:37.106316   70908 logs.go:276] 0 containers: []
	W0311 21:36:37.106325   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:37.106335   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:37.106352   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:37.122333   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:37.122367   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:37.197708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:37.197731   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:37.197742   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:37.281911   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:37.281944   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:37.335978   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:37.336011   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:39.891583   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:39.914741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:39.914823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:39.955751   70908 cri.go:89] found id: ""
	I0311 21:36:39.955773   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.955781   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:39.955786   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:39.955837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:39.997604   70908 cri.go:89] found id: ""
	I0311 21:36:39.997632   70908 logs.go:276] 0 containers: []
	W0311 21:36:39.997642   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:39.997649   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:39.997711   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:40.039138   70908 cri.go:89] found id: ""
	I0311 21:36:40.039168   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.039178   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:40.039186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:40.039230   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:40.079906   70908 cri.go:89] found id: ""
	I0311 21:36:40.079934   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.079945   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:40.079952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:40.080017   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:40.124116   70908 cri.go:89] found id: ""
	I0311 21:36:40.124141   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.124152   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:40.124159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:40.124221   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:40.165078   70908 cri.go:89] found id: ""
	I0311 21:36:40.165099   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.165108   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:40.165113   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:40.165158   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:40.203928   70908 cri.go:89] found id: ""
	I0311 21:36:40.203954   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.203962   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:40.203971   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:40.204018   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:40.244755   70908 cri.go:89] found id: ""
	I0311 21:36:40.244783   70908 logs.go:276] 0 containers: []
	W0311 21:36:40.244793   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:40.244803   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:40.244819   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:40.302090   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:40.302125   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:40.318071   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:40.318097   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:40.405336   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:40.405363   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:40.405378   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:40.493262   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:40.493298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:40.148095   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.651483   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:40.449200   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.450259   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:40.528526   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:42.533619   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:45.029285   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:43.052419   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:43.068300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:43.068378   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:43.109665   70908 cri.go:89] found id: ""
	I0311 21:36:43.109701   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.109717   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:43.109725   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:43.109789   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:43.152233   70908 cri.go:89] found id: ""
	I0311 21:36:43.152253   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.152260   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:43.152265   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:43.152311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:43.194969   70908 cri.go:89] found id: ""
	I0311 21:36:43.194995   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.195002   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:43.195008   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:43.195056   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:43.234555   70908 cri.go:89] found id: ""
	I0311 21:36:43.234581   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.234592   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:43.234597   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:43.234651   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:43.275188   70908 cri.go:89] found id: ""
	I0311 21:36:43.275214   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.275224   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:43.275232   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:43.275287   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:43.314481   70908 cri.go:89] found id: ""
	I0311 21:36:43.314507   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.314515   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:43.314521   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:43.314580   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:43.353287   70908 cri.go:89] found id: ""
	I0311 21:36:43.353317   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.353328   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:43.353336   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:43.353395   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:43.396112   70908 cri.go:89] found id: ""
	I0311 21:36:43.396138   70908 logs.go:276] 0 containers: []
	W0311 21:36:43.396150   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:43.396160   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:43.396175   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:43.456116   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:43.456143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:43.472992   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:43.473023   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:43.558281   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:43.558311   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:43.558327   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:43.641849   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:43.641885   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:45.147404   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.147574   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:44.954864   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.450806   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:47.029669   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:49.529505   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:46.187444   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:46.202848   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:46.202911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:46.244843   70908 cri.go:89] found id: ""
	I0311 21:36:46.244872   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.244880   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:46.244886   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:46.244933   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:46.297789   70908 cri.go:89] found id: ""
	I0311 21:36:46.297820   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.297831   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:46.297838   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:46.297903   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:46.353104   70908 cri.go:89] found id: ""
	I0311 21:36:46.353127   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.353134   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:46.353140   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:46.353211   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:46.426767   70908 cri.go:89] found id: ""
	I0311 21:36:46.426792   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.426799   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:46.426804   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:46.426858   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:46.469850   70908 cri.go:89] found id: ""
	I0311 21:36:46.469881   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.469891   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:46.469899   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:46.469960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:46.510692   70908 cri.go:89] found id: ""
	I0311 21:36:46.510718   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.510726   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:46.510732   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:46.510787   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:46.554445   70908 cri.go:89] found id: ""
	I0311 21:36:46.554468   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.554475   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:46.554482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:46.554527   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:46.592417   70908 cri.go:89] found id: ""
	I0311 21:36:46.592448   70908 logs.go:276] 0 containers: []
	W0311 21:36:46.592458   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:46.592467   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:46.592480   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:46.607106   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:46.607146   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:46.691556   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:46.691575   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:46.691587   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:46.772468   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:46.772503   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:46.814478   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:46.814512   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.368451   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:49.383504   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:49.383573   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:49.427392   70908 cri.go:89] found id: ""
	I0311 21:36:49.427415   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.427426   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:49.427434   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:49.427493   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:49.469022   70908 cri.go:89] found id: ""
	I0311 21:36:49.469044   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.469052   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:49.469059   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:49.469106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:49.510755   70908 cri.go:89] found id: ""
	I0311 21:36:49.510781   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.510792   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:49.510800   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:49.510886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:49.556594   70908 cri.go:89] found id: ""
	I0311 21:36:49.556631   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.556642   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:49.556649   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:49.556710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:49.597035   70908 cri.go:89] found id: ""
	I0311 21:36:49.597059   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.597067   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:49.597072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:49.597138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:49.642947   70908 cri.go:89] found id: ""
	I0311 21:36:49.642975   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.642985   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:49.642993   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:49.643051   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:49.681401   70908 cri.go:89] found id: ""
	I0311 21:36:49.681423   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.681430   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:49.681435   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:49.681478   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:49.718498   70908 cri.go:89] found id: ""
	I0311 21:36:49.718529   70908 logs.go:276] 0 containers: []
	W0311 21:36:49.718539   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:49.718549   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:49.718563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:49.764483   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:49.764515   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:49.821261   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:49.821293   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:49.837110   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:49.837135   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:49.918507   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:49.918529   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:49.918541   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:49.648198   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.146837   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:49.450941   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:51.950760   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.030288   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:54.528831   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:52.500354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:52.516722   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:52.516811   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:52.563312   70908 cri.go:89] found id: ""
	I0311 21:36:52.563340   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.563354   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:52.563362   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:52.563421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:52.603545   70908 cri.go:89] found id: ""
	I0311 21:36:52.603572   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.603581   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:52.603588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:52.603657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:52.645624   70908 cri.go:89] found id: ""
	I0311 21:36:52.645648   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.645658   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:52.645665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:52.645722   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:52.693335   70908 cri.go:89] found id: ""
	I0311 21:36:52.693363   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.693373   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:52.693380   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:52.693437   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:52.740272   70908 cri.go:89] found id: ""
	I0311 21:36:52.740310   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.740331   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:52.740341   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:52.740398   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:52.786241   70908 cri.go:89] found id: ""
	I0311 21:36:52.786276   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.786285   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:52.786291   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:52.786355   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:52.825013   70908 cri.go:89] found id: ""
	I0311 21:36:52.825042   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.825053   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:52.825061   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:52.825117   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:52.862867   70908 cri.go:89] found id: ""
	I0311 21:36:52.862892   70908 logs.go:276] 0 containers: []
	W0311 21:36:52.862901   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:52.862908   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:52.862922   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:52.917005   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:52.917036   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:52.932086   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:52.932112   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:53.012379   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:53.012402   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:53.012413   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:53.096881   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:53.096913   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:55.640142   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:55.656664   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:55.656749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:55.697962   70908 cri.go:89] found id: ""
	I0311 21:36:55.697992   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.698000   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:55.698005   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:55.698059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:55.741888   70908 cri.go:89] found id: ""
	I0311 21:36:55.741910   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.741917   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:55.741921   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:55.741965   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:55.779352   70908 cri.go:89] found id: ""
	I0311 21:36:55.779372   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.779381   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:55.779386   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:55.779430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:55.819496   70908 cri.go:89] found id: ""
	I0311 21:36:55.819530   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.819541   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:55.819549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:55.819612   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:55.859384   70908 cri.go:89] found id: ""
	I0311 21:36:55.859412   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.859419   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:55.859424   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:55.859473   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:55.899415   70908 cri.go:89] found id: ""
	I0311 21:36:55.899438   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.899445   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:55.899450   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:55.899496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:55.938595   70908 cri.go:89] found id: ""
	I0311 21:36:55.938625   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.938637   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:55.938645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:55.938710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:55.980064   70908 cri.go:89] found id: ""
	I0311 21:36:55.980089   70908 logs.go:276] 0 containers: []
	W0311 21:36:55.980096   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:55.980103   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:55.980115   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:36:55.996222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:55.996297   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:36:54.147743   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.150270   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:58.648829   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:54.450767   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.949091   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:58.950443   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:56.529184   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:36:59.029323   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	W0311 21:36:56.081046   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:56.081074   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:56.081090   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:56.167748   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:56.167773   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:56.221118   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:56.221150   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:58.772403   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:36:58.789349   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:36:58.789421   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:36:58.829945   70908 cri.go:89] found id: ""
	I0311 21:36:58.829974   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.829985   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:36:58.829993   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:36:58.830059   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:36:58.877190   70908 cri.go:89] found id: ""
	I0311 21:36:58.877214   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.877224   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:36:58.877231   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:36:58.877295   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:36:58.920086   70908 cri.go:89] found id: ""
	I0311 21:36:58.920113   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.920122   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:36:58.920128   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:36:58.920189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:36:58.956864   70908 cri.go:89] found id: ""
	I0311 21:36:58.956890   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.956900   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:36:58.956907   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:36:58.956967   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:36:58.999363   70908 cri.go:89] found id: ""
	I0311 21:36:58.999390   70908 logs.go:276] 0 containers: []
	W0311 21:36:58.999400   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:36:58.999408   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:36:58.999469   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:36:59.041759   70908 cri.go:89] found id: ""
	I0311 21:36:59.041787   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.041797   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:36:59.041803   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:36:59.041850   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:36:59.084378   70908 cri.go:89] found id: ""
	I0311 21:36:59.084406   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.084417   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:36:59.084425   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:36:59.084479   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:36:59.124105   70908 cri.go:89] found id: ""
	I0311 21:36:59.124151   70908 logs.go:276] 0 containers: []
	W0311 21:36:59.124163   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:36:59.124173   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:36:59.124188   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:36:59.202060   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:36:59.202083   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:36:59.202098   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:36:59.284025   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:36:59.284060   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:36:59.327926   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:36:59.327951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:36:59.382505   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:36:59.382533   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:01.147260   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.149020   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.450230   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.949834   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.529173   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:03.532427   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:01.900084   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:01.914495   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:01.914552   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:01.956887   70908 cri.go:89] found id: ""
	I0311 21:37:01.956912   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.956922   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:01.956929   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:01.956986   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:01.995358   70908 cri.go:89] found id: ""
	I0311 21:37:01.995385   70908 logs.go:276] 0 containers: []
	W0311 21:37:01.995394   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:01.995399   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:01.995448   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:02.033949   70908 cri.go:89] found id: ""
	I0311 21:37:02.033974   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.033984   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:02.033991   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:02.034049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:02.074348   70908 cri.go:89] found id: ""
	I0311 21:37:02.074372   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.074382   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:02.074390   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:02.074449   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:02.112456   70908 cri.go:89] found id: ""
	I0311 21:37:02.112479   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.112486   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:02.112491   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:02.112554   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:02.155102   70908 cri.go:89] found id: ""
	I0311 21:37:02.155130   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.155138   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:02.155149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:02.155205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:02.191359   70908 cri.go:89] found id: ""
	I0311 21:37:02.191386   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.191393   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:02.191399   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:02.191450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:02.236178   70908 cri.go:89] found id: ""
	I0311 21:37:02.236203   70908 logs.go:276] 0 containers: []
	W0311 21:37:02.236211   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:02.236220   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:02.236231   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:02.285794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:02.285818   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:02.342348   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:02.342387   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:02.357230   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:02.357257   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:02.431044   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:02.431064   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:02.431076   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.019473   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:05.035841   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:05.035901   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:05.082013   70908 cri.go:89] found id: ""
	I0311 21:37:05.082034   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.082041   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:05.082046   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:05.082091   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:05.126236   70908 cri.go:89] found id: ""
	I0311 21:37:05.126257   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.126265   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:05.126270   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:05.126311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:05.170573   70908 cri.go:89] found id: ""
	I0311 21:37:05.170601   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.170608   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:05.170614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:05.170658   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:05.213921   70908 cri.go:89] found id: ""
	I0311 21:37:05.213948   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.213958   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:05.213965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:05.214025   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:05.261178   70908 cri.go:89] found id: ""
	I0311 21:37:05.261206   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.261213   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:05.261221   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:05.261273   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:05.306007   70908 cri.go:89] found id: ""
	I0311 21:37:05.306037   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.306045   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:05.306051   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:05.306106   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:05.346653   70908 cri.go:89] found id: ""
	I0311 21:37:05.346679   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.346688   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:05.346694   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:05.346752   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:05.384587   70908 cri.go:89] found id: ""
	I0311 21:37:05.384626   70908 logs.go:276] 0 containers: []
	W0311 21:37:05.384637   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:05.384648   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:05.384664   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:05.440676   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:05.440709   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:05.456989   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:05.457018   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:05.553900   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:05.553932   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:05.553947   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:05.633270   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:05.633300   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:05.647077   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.146975   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:06.449502   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.450008   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:06.028642   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.529826   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:08.181935   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:08.198179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:08.198251   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:08.236484   70908 cri.go:89] found id: ""
	I0311 21:37:08.236506   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.236516   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:08.236524   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:08.236578   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:08.277701   70908 cri.go:89] found id: ""
	I0311 21:37:08.277731   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.277739   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:08.277745   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:08.277804   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:08.319559   70908 cri.go:89] found id: ""
	I0311 21:37:08.319585   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.319596   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:08.319604   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:08.319666   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:08.359752   70908 cri.go:89] found id: ""
	I0311 21:37:08.359777   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.359785   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:08.359791   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:08.359849   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:08.397432   70908 cri.go:89] found id: ""
	I0311 21:37:08.397453   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.397460   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:08.397465   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:08.397511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:08.438708   70908 cri.go:89] found id: ""
	I0311 21:37:08.438732   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.438742   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:08.438749   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:08.438807   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:08.479511   70908 cri.go:89] found id: ""
	I0311 21:37:08.479533   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.479560   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:08.479566   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:08.479620   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:08.521634   70908 cri.go:89] found id: ""
	I0311 21:37:08.521659   70908 logs.go:276] 0 containers: []
	W0311 21:37:08.521670   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:08.521680   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:08.521693   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:08.577033   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:08.577065   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:08.592006   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:08.592030   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:08.680862   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:08.680903   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:08.680919   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:08.764991   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:08.765037   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:10.147819   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:12.648352   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:10.949371   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:12.949571   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:11.028245   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:13.028689   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:15.034232   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:11.313168   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:11.326808   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:11.326876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:11.364223   70908 cri.go:89] found id: ""
	I0311 21:37:11.364246   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.364254   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:11.364259   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:11.364311   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:11.401361   70908 cri.go:89] found id: ""
	I0311 21:37:11.401391   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.401402   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:11.401409   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:11.401459   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:11.441927   70908 cri.go:89] found id: ""
	I0311 21:37:11.441950   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.441957   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:11.441962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:11.442015   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:11.480804   70908 cri.go:89] found id: ""
	I0311 21:37:11.480836   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.480847   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:11.480855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:11.480913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:11.520135   70908 cri.go:89] found id: ""
	I0311 21:37:11.520166   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.520177   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:11.520193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:11.520255   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:11.559214   70908 cri.go:89] found id: ""
	I0311 21:37:11.559244   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.559255   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:11.559263   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:11.559322   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:11.597346   70908 cri.go:89] found id: ""
	I0311 21:37:11.597374   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.597383   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:11.597391   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:11.597452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:11.646095   70908 cri.go:89] found id: ""
	I0311 21:37:11.646118   70908 logs.go:276] 0 containers: []
	W0311 21:37:11.646127   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:11.646137   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:11.646167   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:11.691813   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:11.691844   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:11.745270   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:11.745303   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:11.761107   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:11.761131   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:11.841033   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:11.841059   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:11.841074   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:14.431709   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:14.447064   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:14.447131   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:14.493094   70908 cri.go:89] found id: ""
	I0311 21:37:14.493132   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.493140   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:14.493146   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:14.493195   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:14.537391   70908 cri.go:89] found id: ""
	I0311 21:37:14.537415   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.537423   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:14.537428   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:14.537487   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:14.576284   70908 cri.go:89] found id: ""
	I0311 21:37:14.576306   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.576313   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:14.576319   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:14.576375   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:14.627057   70908 cri.go:89] found id: ""
	I0311 21:37:14.627086   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.627097   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:14.627105   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:14.627163   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:14.669204   70908 cri.go:89] found id: ""
	I0311 21:37:14.669226   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.669233   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:14.669238   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:14.669293   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:14.708787   70908 cri.go:89] found id: ""
	I0311 21:37:14.708812   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.708820   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:14.708826   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:14.708892   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:14.749795   70908 cri.go:89] found id: ""
	I0311 21:37:14.749819   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.749828   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:14.749835   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:14.749893   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:14.794871   70908 cri.go:89] found id: ""
	I0311 21:37:14.794900   70908 logs.go:276] 0 containers: []
	W0311 21:37:14.794911   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:14.794922   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:14.794936   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:14.850022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:14.850050   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:14.866589   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:14.866618   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:14.968887   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:14.968906   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:14.968921   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:15.047376   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:15.047404   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:14.648528   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:16.649275   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:18.649842   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:14.951387   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.451239   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.529411   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:20.030012   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:17.599834   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:17.613610   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:17.613665   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:17.655340   70908 cri.go:89] found id: ""
	I0311 21:37:17.655361   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.655369   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:17.655374   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:17.655416   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:17.695071   70908 cri.go:89] found id: ""
	I0311 21:37:17.695103   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.695114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:17.695121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:17.695178   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:17.731914   70908 cri.go:89] found id: ""
	I0311 21:37:17.731938   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.731946   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:17.731952   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:17.732012   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:17.768198   70908 cri.go:89] found id: ""
	I0311 21:37:17.768224   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.768236   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:17.768242   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:17.768301   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:17.802881   70908 cri.go:89] found id: ""
	I0311 21:37:17.802909   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.802920   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:17.802928   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:17.802983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:17.841660   70908 cri.go:89] found id: ""
	I0311 21:37:17.841684   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.841692   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:17.841698   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:17.841749   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:17.880154   70908 cri.go:89] found id: ""
	I0311 21:37:17.880183   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.880196   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:17.880205   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:17.880260   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:17.919797   70908 cri.go:89] found id: ""
	I0311 21:37:17.919822   70908 logs.go:276] 0 containers: []
	W0311 21:37:17.919829   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:17.919837   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:17.919847   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:17.976607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:17.976636   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:17.993313   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:17.993339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:18.069928   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:18.069956   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:18.069973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:18.152257   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:18.152285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:20.706553   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:20.721148   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:20.721214   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:20.762913   70908 cri.go:89] found id: ""
	I0311 21:37:20.762935   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.762943   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:20.762952   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:20.762997   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:20.811120   70908 cri.go:89] found id: ""
	I0311 21:37:20.811147   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.811158   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:20.811165   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:20.811225   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:20.848987   70908 cri.go:89] found id: ""
	I0311 21:37:20.849015   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.849026   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:20.849033   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:20.849098   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:20.896201   70908 cri.go:89] found id: ""
	I0311 21:37:20.896226   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.896233   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:20.896240   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:20.896299   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:20.936570   70908 cri.go:89] found id: ""
	I0311 21:37:20.936595   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.936603   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:20.936608   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:20.936657   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:20.977535   70908 cri.go:89] found id: ""
	I0311 21:37:20.977565   70908 logs.go:276] 0 containers: []
	W0311 21:37:20.977576   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:20.977584   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:20.977647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:21.015370   70908 cri.go:89] found id: ""
	I0311 21:37:21.015395   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.015405   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:21.015413   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:21.015472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:21.146868   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:23.147272   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:19.950972   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:22.450298   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:22.528109   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:24.530216   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:21.056190   70908 cri.go:89] found id: ""
	I0311 21:37:21.056214   70908 logs.go:276] 0 containers: []
	W0311 21:37:21.056225   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:21.056235   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:21.056255   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:21.112022   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:21.112051   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:21.128841   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:21.128872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:21.209690   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:21.209716   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:21.209732   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:21.291064   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:21.291099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:23.844334   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:23.860000   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:23.860061   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:23.899777   70908 cri.go:89] found id: ""
	I0311 21:37:23.899805   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.899814   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:23.899820   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:23.899879   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:23.941510   70908 cri.go:89] found id: ""
	I0311 21:37:23.941537   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.941547   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:23.941555   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:23.941627   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:23.980564   70908 cri.go:89] found id: ""
	I0311 21:37:23.980592   70908 logs.go:276] 0 containers: []
	W0311 21:37:23.980602   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:23.980614   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:23.980676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:24.020310   70908 cri.go:89] found id: ""
	I0311 21:37:24.020337   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.020348   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:24.020354   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:24.020410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:24.059320   70908 cri.go:89] found id: ""
	I0311 21:37:24.059349   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.059359   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:24.059367   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:24.059424   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:24.096625   70908 cri.go:89] found id: ""
	I0311 21:37:24.096652   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.096660   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:24.096666   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:24.096723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:24.137068   70908 cri.go:89] found id: ""
	I0311 21:37:24.137100   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.137112   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:24.137121   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:24.137182   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:24.181298   70908 cri.go:89] found id: ""
	I0311 21:37:24.181325   70908 logs.go:276] 0 containers: []
	W0311 21:37:24.181336   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:24.181348   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:24.181364   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:24.265423   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:24.265454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:24.318088   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:24.318113   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:24.374402   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:24.374430   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:24.388934   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:24.388962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:24.475842   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:25.647164   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:27.650157   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:24.948984   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:26.949444   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:28.950697   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:27.030240   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:29.030848   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:26.976017   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:26.991533   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:26.991602   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:27.034750   70908 cri.go:89] found id: ""
	I0311 21:37:27.034769   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.034776   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:27.034781   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:27.034837   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:27.073275   70908 cri.go:89] found id: ""
	I0311 21:37:27.073301   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.073309   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:27.073317   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:27.073363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:27.113396   70908 cri.go:89] found id: ""
	I0311 21:37:27.113418   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.113425   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:27.113431   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:27.113482   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:27.157442   70908 cri.go:89] found id: ""
	I0311 21:37:27.157465   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.157475   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:27.157482   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:27.157534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:27.197277   70908 cri.go:89] found id: ""
	I0311 21:37:27.197302   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.197309   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:27.197315   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:27.197363   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:27.237967   70908 cri.go:89] found id: ""
	I0311 21:37:27.237991   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.237999   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:27.238005   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:27.238077   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:27.280434   70908 cri.go:89] found id: ""
	I0311 21:37:27.280459   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.280467   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:27.280472   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:27.280535   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:27.334940   70908 cri.go:89] found id: ""
	I0311 21:37:27.334970   70908 logs.go:276] 0 containers: []
	W0311 21:37:27.334982   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:27.334992   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:27.335010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:27.402535   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:27.402570   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:27.416758   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:27.416787   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:27.492762   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:27.492786   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:27.492803   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:27.576989   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:27.577032   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:30.124039   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:30.138419   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:30.138483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:30.180900   70908 cri.go:89] found id: ""
	I0311 21:37:30.180926   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.180936   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:30.180944   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:30.180998   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:30.222886   70908 cri.go:89] found id: ""
	I0311 21:37:30.222913   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.222921   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:30.222926   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:30.222976   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:30.264332   70908 cri.go:89] found id: ""
	I0311 21:37:30.264357   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.264367   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:30.264376   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:30.264436   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:30.307084   70908 cri.go:89] found id: ""
	I0311 21:37:30.307112   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.307123   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:30.307130   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:30.307188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:30.345954   70908 cri.go:89] found id: ""
	I0311 21:37:30.345979   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.345990   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:30.345997   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:30.346057   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:30.389408   70908 cri.go:89] found id: ""
	I0311 21:37:30.389439   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.389450   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:30.389457   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:30.389517   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:30.438380   70908 cri.go:89] found id: ""
	I0311 21:37:30.438410   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.438420   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:30.438427   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:30.438489   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:30.479860   70908 cri.go:89] found id: ""
	I0311 21:37:30.479884   70908 logs.go:276] 0 containers: []
	W0311 21:37:30.479895   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:30.479906   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:30.479920   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:30.535831   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:30.535857   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:30.552702   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:30.552725   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:30.633417   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:30.633439   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:30.633454   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:30.723106   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:30.723143   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:30.147993   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:32.152839   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:31.450942   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.949947   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:31.528469   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.529721   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:33.270654   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:33.296640   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:33.296710   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:33.366053   70908 cri.go:89] found id: ""
	I0311 21:37:33.366082   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.366093   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:33.366101   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:33.366161   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:33.421455   70908 cri.go:89] found id: ""
	I0311 21:37:33.421488   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.421501   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:33.421509   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:33.421583   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:33.464555   70908 cri.go:89] found id: ""
	I0311 21:37:33.464579   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.464586   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:33.464592   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:33.464647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:33.507044   70908 cri.go:89] found id: ""
	I0311 21:37:33.507086   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.507100   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:33.507110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:33.507175   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:33.561446   70908 cri.go:89] found id: ""
	I0311 21:37:33.561518   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.561532   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:33.561540   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:33.561601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:33.604496   70908 cri.go:89] found id: ""
	I0311 21:37:33.604519   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.604528   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:33.604534   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:33.604591   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:33.645754   70908 cri.go:89] found id: ""
	I0311 21:37:33.645781   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.645791   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:33.645797   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:33.645869   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:33.690041   70908 cri.go:89] found id: ""
	I0311 21:37:33.690071   70908 logs.go:276] 0 containers: []
	W0311 21:37:33.690082   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:33.690092   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:33.690108   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:33.765708   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:33.765737   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:33.765752   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:33.848869   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:33.848906   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:33.900191   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:33.900223   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:33.957101   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:33.957138   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:34.646831   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.647640   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.449429   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:38.948831   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.028141   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:38.028588   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:40.028676   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:36.474442   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:36.490159   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:36.490231   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:36.537784   70908 cri.go:89] found id: ""
	I0311 21:37:36.537812   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.537822   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:36.537829   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:36.537885   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:36.581192   70908 cri.go:89] found id: ""
	I0311 21:37:36.581219   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.581230   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:36.581237   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:36.581297   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:36.620448   70908 cri.go:89] found id: ""
	I0311 21:37:36.620480   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.620492   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:36.620501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:36.620566   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:36.662135   70908 cri.go:89] found id: ""
	I0311 21:37:36.662182   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.662193   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:36.662203   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:36.662268   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:36.708138   70908 cri.go:89] found id: ""
	I0311 21:37:36.708178   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.708188   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:36.708198   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:36.708267   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:36.749668   70908 cri.go:89] found id: ""
	I0311 21:37:36.749697   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.749708   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:36.749717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:36.749783   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:36.788455   70908 cri.go:89] found id: ""
	I0311 21:37:36.788476   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.788483   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:36.788488   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:36.788534   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:36.830216   70908 cri.go:89] found id: ""
	I0311 21:37:36.830244   70908 logs.go:276] 0 containers: []
	W0311 21:37:36.830257   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:36.830267   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:36.830285   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:36.915306   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:36.915336   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:36.958861   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:36.958892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:37.014463   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:37.014489   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:37.029979   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:37.030010   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:37.106840   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:39.607929   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:39.626247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:39.626307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:39.667409   70908 cri.go:89] found id: ""
	I0311 21:37:39.667436   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.667446   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:39.667454   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:39.667509   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:39.714167   70908 cri.go:89] found id: ""
	I0311 21:37:39.714198   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.714210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:39.714217   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:39.714275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:39.754759   70908 cri.go:89] found id: ""
	I0311 21:37:39.754787   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.754798   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:39.754805   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:39.754865   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:39.794999   70908 cri.go:89] found id: ""
	I0311 21:37:39.795028   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.795038   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:39.795045   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:39.795108   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:39.836284   70908 cri.go:89] found id: ""
	I0311 21:37:39.836310   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.836321   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:39.836328   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:39.836386   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:39.876487   70908 cri.go:89] found id: ""
	I0311 21:37:39.876518   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.876530   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:39.876539   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:39.876601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:39.918750   70908 cri.go:89] found id: ""
	I0311 21:37:39.918785   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.918796   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:39.918813   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:39.918871   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:39.958486   70908 cri.go:89] found id: ""
	I0311 21:37:39.958517   70908 logs.go:276] 0 containers: []
	W0311 21:37:39.958529   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:39.958537   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:39.958550   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:39.973899   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:39.973925   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:40.055954   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:40.055980   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:40.055995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:40.144801   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:40.144826   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:40.189692   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:40.189722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:39.148581   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:41.647869   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:43.648550   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:40.949502   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.951277   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.528844   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:44.529317   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:42.748909   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:42.763794   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:42.763877   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:42.801470   70908 cri.go:89] found id: ""
	I0311 21:37:42.801493   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.801500   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:42.801506   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:42.801561   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:42.846267   70908 cri.go:89] found id: ""
	I0311 21:37:42.846294   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.846301   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:42.846307   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:42.846357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:42.890257   70908 cri.go:89] found id: ""
	I0311 21:37:42.890283   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.890294   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:42.890301   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:42.890357   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:42.933605   70908 cri.go:89] found id: ""
	I0311 21:37:42.933628   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.933636   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:42.933643   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:42.933699   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:42.979020   70908 cri.go:89] found id: ""
	I0311 21:37:42.979043   70908 logs.go:276] 0 containers: []
	W0311 21:37:42.979052   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:42.979059   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:42.979122   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:43.021695   70908 cri.go:89] found id: ""
	I0311 21:37:43.021724   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.021734   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:43.021741   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:43.021801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:43.064356   70908 cri.go:89] found id: ""
	I0311 21:37:43.064398   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.064406   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:43.064412   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:43.064457   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:43.101878   70908 cri.go:89] found id: ""
	I0311 21:37:43.101901   70908 logs.go:276] 0 containers: []
	W0311 21:37:43.101909   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:43.101917   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:43.101930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:43.185836   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:43.185861   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:43.185874   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:43.268879   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:43.268912   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:43.319582   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:43.319614   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:43.374996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:43.375022   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:45.890408   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:45.905973   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:45.906041   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:45.951994   70908 cri.go:89] found id: ""
	I0311 21:37:45.952025   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.952040   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:45.952049   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:45.952112   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:45.992913   70908 cri.go:89] found id: ""
	I0311 21:37:45.992953   70908 logs.go:276] 0 containers: []
	W0311 21:37:45.992964   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:45.992971   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:45.993034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:46.036306   70908 cri.go:89] found id: ""
	I0311 21:37:46.036334   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.036345   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:46.036353   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:46.036410   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:46.147754   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:48.647534   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:45.450180   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:47.949568   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:46.532244   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:49.028905   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:46.077532   70908 cri.go:89] found id: ""
	I0311 21:37:46.077564   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.077576   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:46.077583   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:46.077633   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:46.115953   70908 cri.go:89] found id: ""
	I0311 21:37:46.115976   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.115983   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:46.115990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:46.116072   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:46.155665   70908 cri.go:89] found id: ""
	I0311 21:37:46.155699   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.155709   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:46.155717   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:46.155775   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:46.197650   70908 cri.go:89] found id: ""
	I0311 21:37:46.197677   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.197696   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:46.197705   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:46.197766   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:46.243006   70908 cri.go:89] found id: ""
	I0311 21:37:46.243030   70908 logs.go:276] 0 containers: []
	W0311 21:37:46.243037   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:46.243045   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:46.243058   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:46.294668   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:46.294696   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:46.308700   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:46.308721   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:46.387188   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:46.387207   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:46.387219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:46.480390   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:46.480423   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.027202   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:49.042292   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:49.042361   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:49.081547   70908 cri.go:89] found id: ""
	I0311 21:37:49.081568   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.081579   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:49.081585   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:49.081632   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:49.127438   70908 cri.go:89] found id: ""
	I0311 21:37:49.127467   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.127477   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:49.127485   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:49.127545   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:49.173992   70908 cri.go:89] found id: ""
	I0311 21:37:49.174024   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.174033   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:49.174042   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:49.174114   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:49.217087   70908 cri.go:89] found id: ""
	I0311 21:37:49.217120   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.217130   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:49.217138   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:49.217198   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:49.255929   70908 cri.go:89] found id: ""
	I0311 21:37:49.255955   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.255970   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:49.255978   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:49.256037   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:49.296373   70908 cri.go:89] found id: ""
	I0311 21:37:49.296399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.296409   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:49.296417   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:49.296474   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:49.335063   70908 cri.go:89] found id: ""
	I0311 21:37:49.335092   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.335103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:49.335110   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:49.335176   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:49.378374   70908 cri.go:89] found id: ""
	I0311 21:37:49.378399   70908 logs.go:276] 0 containers: []
	W0311 21:37:49.378406   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:49.378414   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:49.378427   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:49.422193   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:49.422220   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:49.474861   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:49.474893   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:49.490193   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:49.490219   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:49.571857   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:49.571880   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:49.571895   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:51.149814   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:53.648033   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:49.949603   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:51.949943   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:53.951963   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:51.531753   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:54.028723   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:52.168934   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:52.183086   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:52.183154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:52.221632   70908 cri.go:89] found id: ""
	I0311 21:37:52.221664   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.221675   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:52.221682   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:52.221743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:52.261550   70908 cri.go:89] found id: ""
	I0311 21:37:52.261575   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.261582   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:52.261588   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:52.261638   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:52.302879   70908 cri.go:89] found id: ""
	I0311 21:37:52.302910   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.302920   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:52.302927   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:52.302987   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:52.346462   70908 cri.go:89] found id: ""
	I0311 21:37:52.346485   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.346494   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:52.346499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:52.346551   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:52.387949   70908 cri.go:89] found id: ""
	I0311 21:37:52.387977   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.387988   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:52.387995   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:52.388052   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:52.428527   70908 cri.go:89] found id: ""
	I0311 21:37:52.428564   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.428574   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:52.428582   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:52.428649   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:52.469516   70908 cri.go:89] found id: ""
	I0311 21:37:52.469548   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.469558   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:52.469565   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:52.469616   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:52.508371   70908 cri.go:89] found id: ""
	I0311 21:37:52.508407   70908 logs.go:276] 0 containers: []
	W0311 21:37:52.508417   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:52.508429   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:52.508444   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:52.587309   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:52.587346   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:52.587361   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:52.666419   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:52.666449   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:52.713150   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:52.713184   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:52.768011   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:52.768041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.284835   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:55.298742   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:55.298799   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:55.340215   70908 cri.go:89] found id: ""
	I0311 21:37:55.340240   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.340251   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:55.340257   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:55.340321   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:55.377930   70908 cri.go:89] found id: ""
	I0311 21:37:55.377956   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.377967   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:55.377974   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:55.378039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:55.418786   70908 cri.go:89] found id: ""
	I0311 21:37:55.418814   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.418822   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:55.418827   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:55.418883   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:55.461566   70908 cri.go:89] found id: ""
	I0311 21:37:55.461586   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.461593   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:55.461601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:55.461655   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:55.502917   70908 cri.go:89] found id: ""
	I0311 21:37:55.502945   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.502955   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:55.502962   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:55.503022   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:55.551417   70908 cri.go:89] found id: ""
	I0311 21:37:55.551441   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.551454   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:55.551462   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:55.551514   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:55.596060   70908 cri.go:89] found id: ""
	I0311 21:37:55.596092   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.596103   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:55.596111   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:55.596172   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:55.635495   70908 cri.go:89] found id: ""
	I0311 21:37:55.635523   70908 logs.go:276] 0 containers: []
	W0311 21:37:55.635535   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:55.635547   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:55.635564   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:55.691705   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:55.691735   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:37:55.707696   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:55.707718   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:55.780432   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:55.780452   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:55.780465   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:55.866033   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:55.866067   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:55.648873   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.147404   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:56.452135   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.951150   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:56.528533   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.529769   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:37:58.437299   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:37:58.453058   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:37:58.453125   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:37:58.493317   70908 cri.go:89] found id: ""
	I0311 21:37:58.493339   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.493347   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:37:58.493353   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:37:58.493408   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:37:58.543533   70908 cri.go:89] found id: ""
	I0311 21:37:58.543556   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.543567   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:37:58.543578   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:37:58.543634   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:37:58.585255   70908 cri.go:89] found id: ""
	I0311 21:37:58.585282   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.585292   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:37:58.585300   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:37:58.585359   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:37:58.622393   70908 cri.go:89] found id: ""
	I0311 21:37:58.622421   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.622428   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:37:58.622434   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:37:58.622501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:37:58.661939   70908 cri.go:89] found id: ""
	I0311 21:37:58.661963   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.661971   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:37:58.661977   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:37:58.662034   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:37:58.703628   70908 cri.go:89] found id: ""
	I0311 21:37:58.703663   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.703674   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:37:58.703682   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:37:58.703743   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:37:58.742553   70908 cri.go:89] found id: ""
	I0311 21:37:58.742583   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.742594   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:37:58.742601   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:37:58.742662   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:37:58.785016   70908 cri.go:89] found id: ""
	I0311 21:37:58.785040   70908 logs.go:276] 0 containers: []
	W0311 21:37:58.785047   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:37:58.785055   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:37:58.785071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:37:58.857757   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:37:58.857773   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:37:58.857786   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:37:58.946120   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:37:58.946148   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:37:58.996288   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:37:58.996328   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:37:59.055371   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:37:59.055407   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:00.651621   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.149663   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:00.951776   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.451012   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:01.028303   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:03.028600   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:05.032276   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:01.571092   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:01.591149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:01.591238   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:01.629156   70908 cri.go:89] found id: ""
	I0311 21:38:01.629184   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.629196   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:01.629203   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:01.629261   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:01.673656   70908 cri.go:89] found id: ""
	I0311 21:38:01.673680   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.673687   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:01.673692   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:01.673739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:01.713361   70908 cri.go:89] found id: ""
	I0311 21:38:01.713389   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.713397   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:01.713403   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:01.713450   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:01.757256   70908 cri.go:89] found id: ""
	I0311 21:38:01.757286   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.757298   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:01.757305   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:01.757362   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:01.797538   70908 cri.go:89] found id: ""
	I0311 21:38:01.797565   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.797573   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:01.797580   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:01.797635   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:01.838664   70908 cri.go:89] found id: ""
	I0311 21:38:01.838692   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.838701   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:01.838707   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:01.838754   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:01.893638   70908 cri.go:89] found id: ""
	I0311 21:38:01.893668   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.893679   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:01.893686   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:01.893747   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:01.935547   70908 cri.go:89] found id: ""
	I0311 21:38:01.935569   70908 logs.go:276] 0 containers: []
	W0311 21:38:01.935577   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:01.935585   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:01.935596   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:01.989964   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:01.989988   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:02.004949   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:02.004973   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:02.082006   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:02.082024   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:02.082041   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:02.171040   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:02.171072   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:04.724699   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:04.741445   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:04.741512   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:04.783924   70908 cri.go:89] found id: ""
	I0311 21:38:04.783951   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.783962   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:04.783969   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:04.784028   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:04.825806   70908 cri.go:89] found id: ""
	I0311 21:38:04.825835   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.825845   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:04.825852   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:04.825913   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:04.864070   70908 cri.go:89] found id: ""
	I0311 21:38:04.864106   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.864118   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:04.864126   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:04.864181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:04.901735   70908 cri.go:89] found id: ""
	I0311 21:38:04.901759   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.901769   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:04.901777   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:04.901832   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:04.941473   70908 cri.go:89] found id: ""
	I0311 21:38:04.941496   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.941505   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:04.941513   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:04.941569   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:04.993132   70908 cri.go:89] found id: ""
	I0311 21:38:04.993162   70908 logs.go:276] 0 containers: []
	W0311 21:38:04.993170   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:04.993178   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:04.993237   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:05.037925   70908 cri.go:89] found id: ""
	I0311 21:38:05.037950   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.037960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:05.037967   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:05.038026   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:05.080726   70908 cri.go:89] found id: ""
	I0311 21:38:05.080773   70908 logs.go:276] 0 containers: []
	W0311 21:38:05.080784   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:05.080794   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:05.080806   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:05.138205   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:05.138233   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:05.155048   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:05.155071   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:05.233067   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:05.233086   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:05.233099   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:05.317897   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:05.317928   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:05.646661   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.647686   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:05.949900   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.950261   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.528049   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:09.530724   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:07.863484   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:07.877342   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:07.877411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:07.916352   70908 cri.go:89] found id: ""
	I0311 21:38:07.916374   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.916383   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:07.916391   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:07.916454   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:07.954833   70908 cri.go:89] found id: ""
	I0311 21:38:07.954854   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.954863   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:07.954870   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:07.954926   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:07.993124   70908 cri.go:89] found id: ""
	I0311 21:38:07.993152   70908 logs.go:276] 0 containers: []
	W0311 21:38:07.993161   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:07.993168   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:07.993232   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:08.039081   70908 cri.go:89] found id: ""
	I0311 21:38:08.039108   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.039118   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:08.039125   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:08.039191   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:08.084627   70908 cri.go:89] found id: ""
	I0311 21:38:08.084650   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.084658   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:08.084665   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:08.084712   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:08.125986   70908 cri.go:89] found id: ""
	I0311 21:38:08.126015   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.126026   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:08.126034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:08.126080   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:08.167149   70908 cri.go:89] found id: ""
	I0311 21:38:08.167176   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.167188   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:08.167193   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:08.167252   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:08.204988   70908 cri.go:89] found id: ""
	I0311 21:38:08.205012   70908 logs.go:276] 0 containers: []
	W0311 21:38:08.205020   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:08.205028   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:08.205043   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:08.295226   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:08.295268   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:08.357789   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:08.357820   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:08.434091   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:08.434132   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:08.455208   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:08.455240   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:08.529620   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:11.030060   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:09.648047   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.649628   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:13.652370   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:10.450139   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:12.949551   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.531354   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:14.029703   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:11.044303   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:11.046353   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:11.088067   70908 cri.go:89] found id: ""
	I0311 21:38:11.088099   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.088110   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:11.088117   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:11.088177   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:11.131077   70908 cri.go:89] found id: ""
	I0311 21:38:11.131104   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.131114   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:11.131121   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:11.131181   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:11.172409   70908 cri.go:89] found id: ""
	I0311 21:38:11.172431   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.172439   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:11.172444   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:11.172496   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:11.216775   70908 cri.go:89] found id: ""
	I0311 21:38:11.216817   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.216825   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:11.216830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:11.216886   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:11.255105   70908 cri.go:89] found id: ""
	I0311 21:38:11.255129   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.255137   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:11.255142   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:11.255205   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:11.292397   70908 cri.go:89] found id: ""
	I0311 21:38:11.292429   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.292440   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:11.292448   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:11.292518   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:11.330376   70908 cri.go:89] found id: ""
	I0311 21:38:11.330397   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.330408   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:11.330415   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:11.330476   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:11.367699   70908 cri.go:89] found id: ""
	I0311 21:38:11.367727   70908 logs.go:276] 0 containers: []
	W0311 21:38:11.367737   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:11.367748   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:11.367763   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:11.421847   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:11.421876   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:11.437570   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:11.437593   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:11.522084   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:11.522108   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:11.522123   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:11.606181   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:11.606228   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.153952   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:14.175726   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:14.175798   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:14.221752   70908 cri.go:89] found id: ""
	I0311 21:38:14.221784   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.221798   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:14.221807   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:14.221895   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:14.286690   70908 cri.go:89] found id: ""
	I0311 21:38:14.286720   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.286740   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:14.286757   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:14.286824   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:14.343764   70908 cri.go:89] found id: ""
	I0311 21:38:14.343790   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.343799   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:14.343806   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:14.343876   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:14.381198   70908 cri.go:89] found id: ""
	I0311 21:38:14.381220   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.381230   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:14.381237   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:14.381307   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:14.421578   70908 cri.go:89] found id: ""
	I0311 21:38:14.421603   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.421613   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:14.421620   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:14.421678   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:14.462945   70908 cri.go:89] found id: ""
	I0311 21:38:14.462972   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.462982   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:14.462990   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:14.463049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:14.503503   70908 cri.go:89] found id: ""
	I0311 21:38:14.503532   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.503543   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:14.503550   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:14.503610   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:14.543987   70908 cri.go:89] found id: ""
	I0311 21:38:14.544021   70908 logs.go:276] 0 containers: []
	W0311 21:38:14.544034   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:14.544045   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:14.544062   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:14.624781   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:14.624804   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:14.624821   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:14.707130   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:14.707161   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:14.750815   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:14.750848   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:14.806855   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:14.806882   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:16.149516   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:18.646716   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:14.949827   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:16.953660   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:16.031935   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:18.529085   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:17.325267   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:17.340421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:17.340483   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:17.382808   70908 cri.go:89] found id: ""
	I0311 21:38:17.382831   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.382841   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:17.382849   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:17.382906   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:17.424838   70908 cri.go:89] found id: ""
	I0311 21:38:17.424865   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.424875   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:17.424883   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:17.424940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:17.466298   70908 cri.go:89] found id: ""
	I0311 21:38:17.466320   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.466327   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:17.466333   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:17.466397   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:17.506648   70908 cri.go:89] found id: ""
	I0311 21:38:17.506678   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.506685   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:17.506691   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:17.506739   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:17.544019   70908 cri.go:89] found id: ""
	I0311 21:38:17.544048   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.544057   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:17.544067   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:17.544154   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:17.583691   70908 cri.go:89] found id: ""
	I0311 21:38:17.583710   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.583717   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:17.583723   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:17.583768   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:17.624432   70908 cri.go:89] found id: ""
	I0311 21:38:17.624453   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.624460   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:17.624466   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:17.624516   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:17.663253   70908 cri.go:89] found id: ""
	I0311 21:38:17.663294   70908 logs.go:276] 0 containers: []
	W0311 21:38:17.663312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:17.663322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:17.663339   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:17.749928   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:17.749962   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:17.792817   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:17.792853   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:17.847391   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:17.847419   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:17.862813   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:17.862835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:17.935307   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:20.435995   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:20.452441   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:20.452510   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:20.491960   70908 cri.go:89] found id: ""
	I0311 21:38:20.491985   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.491992   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:20.491998   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:20.492045   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:20.531679   70908 cri.go:89] found id: ""
	I0311 21:38:20.531700   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.531707   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:20.531712   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:20.531764   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:20.571666   70908 cri.go:89] found id: ""
	I0311 21:38:20.571687   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.571694   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:20.571699   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:20.571762   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:20.611165   70908 cri.go:89] found id: ""
	I0311 21:38:20.611187   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.611194   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:20.611199   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:20.611248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:20.648680   70908 cri.go:89] found id: ""
	I0311 21:38:20.648709   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.648720   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:20.648728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:20.648801   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:20.690177   70908 cri.go:89] found id: ""
	I0311 21:38:20.690204   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.690215   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:20.690222   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:20.690298   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:20.728918   70908 cri.go:89] found id: ""
	I0311 21:38:20.728949   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.728960   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:20.728968   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:20.729039   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:20.773559   70908 cri.go:89] found id: ""
	I0311 21:38:20.773586   70908 logs.go:276] 0 containers: []
	W0311 21:38:20.773596   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:20.773607   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:20.773623   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:20.788709   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:20.788750   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:20.869832   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:20.869856   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:20.869868   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:20.963515   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:20.963544   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:21.007029   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:21.007055   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:21.147703   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.660410   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:19.449416   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:21.451194   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.950401   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:20.529497   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:22.529947   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:25.030431   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:23.566134   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:23.583855   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:23.583911   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:23.623605   70908 cri.go:89] found id: ""
	I0311 21:38:23.623633   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.623656   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:23.623664   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:23.623719   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:23.663058   70908 cri.go:89] found id: ""
	I0311 21:38:23.663081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.663091   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:23.663098   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:23.663157   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:23.701930   70908 cri.go:89] found id: ""
	I0311 21:38:23.701963   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.701975   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:23.701985   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:23.702049   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:23.743925   70908 cri.go:89] found id: ""
	I0311 21:38:23.743955   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.743964   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:23.743970   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:23.744046   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:23.784030   70908 cri.go:89] found id: ""
	I0311 21:38:23.784055   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.784066   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:23.784073   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:23.784132   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:23.823054   70908 cri.go:89] found id: ""
	I0311 21:38:23.823081   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.823089   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:23.823097   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:23.823156   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:23.863629   70908 cri.go:89] found id: ""
	I0311 21:38:23.863654   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.863662   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:23.863668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:23.863724   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:23.904429   70908 cri.go:89] found id: ""
	I0311 21:38:23.904454   70908 logs.go:276] 0 containers: []
	W0311 21:38:23.904462   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:23.904470   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:23.904481   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:23.962356   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:23.962393   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:23.977667   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:23.977689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:24.068791   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:24.068820   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:24.068835   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:24.157857   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:24.157892   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:26.147447   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:28.148069   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:26.450243   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:28.950495   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:27.530194   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:30.029286   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:26.705872   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:26.720840   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:26.720936   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:26.766449   70908 cri.go:89] found id: ""
	I0311 21:38:26.766480   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.766490   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:26.766496   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:26.766557   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:26.806179   70908 cri.go:89] found id: ""
	I0311 21:38:26.806203   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.806210   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:26.806216   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:26.806275   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:26.850737   70908 cri.go:89] found id: ""
	I0311 21:38:26.850765   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.850775   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:26.850785   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:26.850845   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:26.897694   70908 cri.go:89] found id: ""
	I0311 21:38:26.897722   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.897733   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:26.897744   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:26.897802   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:26.940940   70908 cri.go:89] found id: ""
	I0311 21:38:26.940962   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.940969   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:26.940975   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:26.941021   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:26.978576   70908 cri.go:89] found id: ""
	I0311 21:38:26.978604   70908 logs.go:276] 0 containers: []
	W0311 21:38:26.978614   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:26.978625   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:26.978682   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:27.016331   70908 cri.go:89] found id: ""
	I0311 21:38:27.016363   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.016374   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:27.016381   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:27.016439   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:27.061541   70908 cri.go:89] found id: ""
	I0311 21:38:27.061569   70908 logs.go:276] 0 containers: []
	W0311 21:38:27.061580   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:27.061590   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:27.061609   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:27.154977   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:27.155017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:27.204458   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:27.204488   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:27.259960   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:27.259997   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:27.277806   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:27.277832   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:27.356111   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:29.856828   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:29.871331   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:29.871413   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:29.912867   70908 cri.go:89] found id: ""
	I0311 21:38:29.912895   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.912904   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:29.912910   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:29.912973   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:29.953458   70908 cri.go:89] found id: ""
	I0311 21:38:29.953483   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.953491   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:29.953497   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:29.953553   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:29.997873   70908 cri.go:89] found id: ""
	I0311 21:38:29.997904   70908 logs.go:276] 0 containers: []
	W0311 21:38:29.997912   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:29.997921   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:29.997983   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:30.038831   70908 cri.go:89] found id: ""
	I0311 21:38:30.038861   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.038872   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:30.038880   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:30.038940   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:30.082089   70908 cri.go:89] found id: ""
	I0311 21:38:30.082117   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.082127   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:30.082135   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:30.082213   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:30.121167   70908 cri.go:89] found id: ""
	I0311 21:38:30.121198   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.121209   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:30.121216   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:30.121274   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:30.162342   70908 cri.go:89] found id: ""
	I0311 21:38:30.162371   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.162380   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:30.162393   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:30.162452   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:30.201727   70908 cri.go:89] found id: ""
	I0311 21:38:30.201753   70908 logs.go:276] 0 containers: []
	W0311 21:38:30.201761   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:30.201769   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:30.201780   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:30.283314   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:30.283346   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:30.333900   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:30.333930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:30.391761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:30.391798   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:30.407907   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:30.407930   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:30.489560   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:30.646773   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.649048   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:31.456251   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:33.951315   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.529160   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:34.530183   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:32.989976   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:33.004724   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:33.004814   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:33.049701   70908 cri.go:89] found id: ""
	I0311 21:38:33.049733   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.049743   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:33.049753   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:33.049823   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:33.097759   70908 cri.go:89] found id: ""
	I0311 21:38:33.097792   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.097804   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:33.097811   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:33.097875   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:33.143257   70908 cri.go:89] found id: ""
	I0311 21:38:33.143291   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.143300   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:33.143308   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:33.143376   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:33.187434   70908 cri.go:89] found id: ""
	I0311 21:38:33.187464   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.187477   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:33.187483   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:33.187558   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:33.236201   70908 cri.go:89] found id: ""
	I0311 21:38:33.236230   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.236239   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:33.236245   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:33.236312   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:33.279710   70908 cri.go:89] found id: ""
	I0311 21:38:33.279783   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.279816   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:33.279830   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:33.279898   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:33.325022   70908 cri.go:89] found id: ""
	I0311 21:38:33.325053   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.325064   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:33.325072   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:33.325138   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:33.368588   70908 cri.go:89] found id: ""
	I0311 21:38:33.368614   70908 logs.go:276] 0 containers: []
	W0311 21:38:33.368622   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:33.368629   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:33.368640   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:33.427761   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:33.427801   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:33.444440   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:33.444472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:33.527745   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:33.527764   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:33.527775   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:33.608215   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:33.608248   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:35.146541   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:37.146917   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.450175   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:38.949371   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.531125   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:39.028780   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:36.158253   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:36.172370   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:36.172438   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:36.216905   70908 cri.go:89] found id: ""
	I0311 21:38:36.216935   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.216945   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:36.216951   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:36.216996   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:36.260844   70908 cri.go:89] found id: ""
	I0311 21:38:36.260875   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.260885   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:36.260890   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:36.260941   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:36.306730   70908 cri.go:89] found id: ""
	I0311 21:38:36.306755   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.306767   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:36.306772   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:36.306820   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:36.346957   70908 cri.go:89] found id: ""
	I0311 21:38:36.346993   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.347004   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:36.347012   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:36.347082   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:36.392265   70908 cri.go:89] found id: ""
	I0311 21:38:36.392295   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.392306   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:36.392313   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:36.392379   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:36.433383   70908 cri.go:89] found id: ""
	I0311 21:38:36.433407   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.433414   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:36.433421   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:36.433467   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:36.471291   70908 cri.go:89] found id: ""
	I0311 21:38:36.471325   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.471336   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:36.471344   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:36.471411   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:36.514662   70908 cri.go:89] found id: ""
	I0311 21:38:36.514688   70908 logs.go:276] 0 containers: []
	W0311 21:38:36.514698   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:36.514708   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:36.514722   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:36.533222   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:36.533251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:36.616359   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:36.616384   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:36.616400   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:36.719105   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:36.719137   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:36.771125   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:36.771156   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.324847   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:39.341149   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:39.341218   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:39.380284   70908 cri.go:89] found id: ""
	I0311 21:38:39.380324   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.380335   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:39.380343   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:39.380407   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:39.429860   70908 cri.go:89] found id: ""
	I0311 21:38:39.429886   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.429894   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:39.429899   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:39.429960   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:39.468089   70908 cri.go:89] found id: ""
	I0311 21:38:39.468113   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.468121   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:39.468127   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:39.468188   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:39.508589   70908 cri.go:89] found id: ""
	I0311 21:38:39.508617   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.508628   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:39.508636   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:39.508695   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:39.552427   70908 cri.go:89] found id: ""
	I0311 21:38:39.552451   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.552459   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:39.552464   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:39.552511   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:39.592586   70908 cri.go:89] found id: ""
	I0311 21:38:39.592607   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.592615   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:39.592621   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:39.592670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:39.637138   70908 cri.go:89] found id: ""
	I0311 21:38:39.637167   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.637178   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:39.637186   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:39.637248   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:39.679422   70908 cri.go:89] found id: ""
	I0311 21:38:39.679457   70908 logs.go:276] 0 containers: []
	W0311 21:38:39.679470   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:39.679482   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:39.679499   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:39.734815   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:39.734850   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:39.750448   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:39.750472   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:39.832912   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:39.832936   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:39.832951   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:39.924020   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:39.924061   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:39.648759   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:42.146226   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:40.950021   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:42.951344   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:41.528407   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:43.529130   70458 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:43.529166   70458 pod_ready.go:81] duration metric: took 4m0.007627735s for pod "metrics-server-57f55c9bc5-nv4gd" in "kube-system" namespace to be "Ready" ...
	E0311 21:38:43.529179   70458 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0311 21:38:43.529188   70458 pod_ready.go:38] duration metric: took 4m4.551429192s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:38:43.529207   70458 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:38:43.529242   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:43.529306   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:43.589292   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:43.589314   70458 cri.go:89] found id: ""
	I0311 21:38:43.589323   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:43.589388   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.595182   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:43.595267   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:43.645002   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:43.645027   70458 cri.go:89] found id: ""
	I0311 21:38:43.645036   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:43.645088   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.650463   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:43.650537   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:43.693876   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:43.693894   70458 cri.go:89] found id: ""
	I0311 21:38:43.693902   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:43.693958   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.699273   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:43.699340   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:43.752552   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:43.752585   70458 cri.go:89] found id: ""
	I0311 21:38:43.752596   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:43.752667   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.758307   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:43.758384   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:43.802761   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:43.802789   70458 cri.go:89] found id: ""
	I0311 21:38:43.802798   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:43.802858   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.807796   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:43.807867   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:43.853820   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:43.853843   70458 cri.go:89] found id: ""
	I0311 21:38:43.853851   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:43.853907   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.859377   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:43.859451   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:43.910605   70458 cri.go:89] found id: ""
	I0311 21:38:43.910640   70458 logs.go:276] 0 containers: []
	W0311 21:38:43.910648   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:43.910655   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:43.910702   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:43.955602   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:43.955624   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:43.955629   70458 cri.go:89] found id: ""
	I0311 21:38:43.955645   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:43.955713   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.960856   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:43.965889   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:43.965919   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:44.013879   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:44.013908   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:44.064641   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:44.064669   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:44.118095   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:44.118120   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:44.177775   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:44.177819   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:44.242090   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:44.242129   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:44.261628   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:44.261665   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:44.322616   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:44.322656   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:44.388117   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:44.388159   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:44.445980   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:44.446018   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:44.980199   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:44.980243   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:45.138312   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:45.138368   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:45.208626   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:45.208664   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:42.472932   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:42.488034   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:42.488090   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:42.530945   70908 cri.go:89] found id: ""
	I0311 21:38:42.530971   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.530981   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:42.530989   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:42.531053   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:42.571906   70908 cri.go:89] found id: ""
	I0311 21:38:42.571939   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.571951   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:42.571960   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:42.572029   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:42.613198   70908 cri.go:89] found id: ""
	I0311 21:38:42.613228   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.613239   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:42.613247   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:42.613330   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:42.654740   70908 cri.go:89] found id: ""
	I0311 21:38:42.654762   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.654770   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:42.654775   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:42.654821   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:42.694797   70908 cri.go:89] found id: ""
	I0311 21:38:42.694836   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.694847   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:42.694854   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:42.694931   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:42.738918   70908 cri.go:89] found id: ""
	I0311 21:38:42.738946   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.738958   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:42.738965   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:42.739032   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:42.780836   70908 cri.go:89] found id: ""
	I0311 21:38:42.780870   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.780881   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:42.780888   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:42.780943   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:42.824672   70908 cri.go:89] found id: ""
	I0311 21:38:42.824701   70908 logs.go:276] 0 containers: []
	W0311 21:38:42.824712   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:42.824721   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:42.824747   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:42.877219   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:42.877253   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:42.934996   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:42.935033   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:42.952125   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:42.952152   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:43.036657   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:43.036678   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:43.036695   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:45.629959   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:45.648501   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:45.648581   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:45.690083   70908 cri.go:89] found id: ""
	I0311 21:38:45.690117   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.690128   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:45.690136   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:45.690201   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:45.736497   70908 cri.go:89] found id: ""
	I0311 21:38:45.736519   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.736526   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:45.736531   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:45.736576   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:45.778590   70908 cri.go:89] found id: ""
	I0311 21:38:45.778625   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.778636   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:45.778645   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:45.778723   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:45.822322   70908 cri.go:89] found id: ""
	I0311 21:38:45.822351   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.822359   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:45.822365   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:45.822419   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:45.868591   70908 cri.go:89] found id: ""
	I0311 21:38:45.868618   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.868627   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:45.868633   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:45.868680   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:45.915137   70908 cri.go:89] found id: ""
	I0311 21:38:45.915165   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.915178   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:45.915187   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:45.915258   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:45.960432   70908 cri.go:89] found id: ""
	I0311 21:38:45.960459   70908 logs.go:276] 0 containers: []
	W0311 21:38:45.960469   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:45.960476   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:45.960529   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:46.006089   70908 cri.go:89] found id: ""
	I0311 21:38:46.006168   70908 logs.go:276] 0 containers: []
	W0311 21:38:46.006185   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:46.006195   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:46.006209   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:44.153091   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:46.650654   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:44.951550   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:46.952791   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:47.756629   70458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:47.776613   70458 api_server.go:72] duration metric: took 4m14.182101385s to wait for apiserver process to appear ...
	I0311 21:38:47.776651   70458 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:38:47.776691   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:47.776774   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:47.826534   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:47.826553   70458 cri.go:89] found id: ""
	I0311 21:38:47.826560   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:47.826609   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.831565   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:47.831637   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:47.876504   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:47.876531   70458 cri.go:89] found id: ""
	I0311 21:38:47.876541   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:47.876598   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.882130   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:47.882224   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:47.930064   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:47.930087   70458 cri.go:89] found id: ""
	I0311 21:38:47.930096   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:47.930139   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.935357   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:47.935433   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:47.989169   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:47.989196   70458 cri.go:89] found id: ""
	I0311 21:38:47.989206   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:47.989262   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:47.994341   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:47.994401   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:48.037592   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:48.037619   70458 cri.go:89] found id: ""
	I0311 21:38:48.037629   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:48.037692   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.043377   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:48.043453   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:48.088629   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:48.088651   70458 cri.go:89] found id: ""
	I0311 21:38:48.088671   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:48.088722   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.093944   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:48.094016   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:48.144943   70458 cri.go:89] found id: ""
	I0311 21:38:48.144971   70458 logs.go:276] 0 containers: []
	W0311 21:38:48.144983   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:48.144990   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:48.145050   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:48.188857   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:48.188877   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:48.188881   70458 cri.go:89] found id: ""
	I0311 21:38:48.188887   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:48.188934   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.195123   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:48.200643   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:48.200673   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:48.246864   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:48.246894   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:48.715510   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:48.715545   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:48.775676   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:48.775716   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:48.793121   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:48.793157   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:48.863992   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:48.864040   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:48.922775   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:48.922810   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:48.996820   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:48.996866   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:49.045065   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:49.045097   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:49.199072   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:49.199137   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:49.283329   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:49.283360   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:49.340461   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:49.340502   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:49.391436   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:49.391460   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:46.064257   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:46.064296   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:46.080304   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:46.080337   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:46.177978   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:46.178001   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:46.178017   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:46.265260   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:46.265298   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:48.814221   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:48.835695   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:48.835793   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:48.898391   70908 cri.go:89] found id: ""
	I0311 21:38:48.898418   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.898429   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:48.898437   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:48.898501   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:48.972552   70908 cri.go:89] found id: ""
	I0311 21:38:48.972596   70908 logs.go:276] 0 containers: []
	W0311 21:38:48.972607   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:48.972617   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:48.972684   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:49.022346   70908 cri.go:89] found id: ""
	I0311 21:38:49.022371   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.022379   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:49.022384   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:49.022430   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:49.078415   70908 cri.go:89] found id: ""
	I0311 21:38:49.078444   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.078455   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:49.078463   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:49.078526   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:49.119369   70908 cri.go:89] found id: ""
	I0311 21:38:49.119402   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.119412   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:49.119420   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:49.119497   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:49.169866   70908 cri.go:89] found id: ""
	I0311 21:38:49.169897   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.169908   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:49.169916   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:49.169978   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:49.223619   70908 cri.go:89] found id: ""
	I0311 21:38:49.223642   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.223650   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:49.223656   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:49.223704   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:49.278499   70908 cri.go:89] found id: ""
	I0311 21:38:49.278531   70908 logs.go:276] 0 containers: []
	W0311 21:38:49.278542   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:49.278551   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:49.278563   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:49.294734   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:49.294760   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:49.390223   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:49.390252   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:49.390267   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:49.481214   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:49.481250   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:49.530285   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:49.530321   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:49.149825   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.648269   70604 pod_ready.go:102] pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:53.140832   70604 pod_ready.go:81] duration metric: took 4m0.000856291s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" ...
	E0311 21:38:53.140873   70604 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-7qw98" in "kube-system" namespace to be "Ready" (will not retry!)
	I0311 21:38:53.140895   70604 pod_ready.go:38] duration metric: took 4m13.032115697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:38:53.140925   70604 kubeadm.go:591] duration metric: took 4m21.406945055s to restartPrimaryControlPlane
	W0311 21:38:53.140993   70604 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:38:53.141028   70604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:38:49.450738   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.950491   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:53.952209   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:51.955522   70458 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0311 21:38:51.961814   70458 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0311 21:38:51.963188   70458 api_server.go:141] control plane version: v1.29.0-rc.2
	I0311 21:38:51.963209   70458 api_server.go:131] duration metric: took 4.186550701s to wait for apiserver health ...
	I0311 21:38:51.963218   70458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:38:51.963242   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:51.963294   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:52.020708   70458 cri.go:89] found id: "1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:52.020727   70458 cri.go:89] found id: ""
	I0311 21:38:52.020746   70458 logs.go:276] 1 containers: [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902]
	I0311 21:38:52.020815   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.026606   70458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:52.026668   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:52.072045   70458 cri.go:89] found id: "c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:52.072063   70458 cri.go:89] found id: ""
	I0311 21:38:52.072071   70458 logs.go:276] 1 containers: [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a]
	I0311 21:38:52.072130   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.078592   70458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:52.078771   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:52.139445   70458 cri.go:89] found id: "47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:52.139480   70458 cri.go:89] found id: ""
	I0311 21:38:52.139490   70458 logs.go:276] 1 containers: [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371]
	I0311 21:38:52.139548   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.148641   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:52.148724   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:52.199332   70458 cri.go:89] found id: "afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:52.199360   70458 cri.go:89] found id: ""
	I0311 21:38:52.199371   70458 logs.go:276] 1 containers: [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0]
	I0311 21:38:52.199433   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.207033   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:52.207096   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:52.267514   70458 cri.go:89] found id: "c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:52.267540   70458 cri.go:89] found id: ""
	I0311 21:38:52.267549   70458 logs.go:276] 1 containers: [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db]
	I0311 21:38:52.267615   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.274048   70458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:52.274132   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:52.330293   70458 cri.go:89] found id: "349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:52.330324   70458 cri.go:89] found id: ""
	I0311 21:38:52.330334   70458 logs.go:276] 1 containers: [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c]
	I0311 21:38:52.330395   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.336062   70458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:52.336143   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:52.381909   70458 cri.go:89] found id: ""
	I0311 21:38:52.381941   70458 logs.go:276] 0 containers: []
	W0311 21:38:52.381952   70458 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:52.381960   70458 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0311 21:38:52.382026   70458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0311 21:38:52.441879   70458 cri.go:89] found id: "21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:52.441908   70458 cri.go:89] found id: "8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:52.441919   70458 cri.go:89] found id: ""
	I0311 21:38:52.441928   70458 logs.go:276] 2 containers: [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001]
	I0311 21:38:52.441988   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.449288   70458 ssh_runner.go:195] Run: which crictl
	I0311 21:38:52.456632   70458 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:52.456664   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.526327   70458 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:52.526368   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:52.545008   70458 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:52.545035   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:52.699959   70458 logs.go:123] Gathering logs for kube-apiserver [1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902] ...
	I0311 21:38:52.699995   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed4ff4bec8a1f1d22db55d94ff468b194047f6add694f7c6afc8231f6bc1902"
	I0311 21:38:52.762045   70458 logs.go:123] Gathering logs for etcd [c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a] ...
	I0311 21:38:52.762079   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0cb4bf3e770c6586e27b723f3fab58cba5cc3abe0eb68d72030d837697f558a"
	I0311 21:38:52.828963   70458 logs.go:123] Gathering logs for kube-scheduler [afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0] ...
	I0311 21:38:52.829005   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afcbb2dc1ded0eaa92afe87267dca2fe48445caba1d5ec66087512f04a9106e0"
	I0311 21:38:52.874202   70458 logs.go:123] Gathering logs for kube-proxy [c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db] ...
	I0311 21:38:52.874237   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4b1f09c4c07d16831f1033763372ae1e7a486e86a7558926c261f71e41c48db"
	I0311 21:38:52.916842   70458 logs.go:123] Gathering logs for storage-provisioner [21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589] ...
	I0311 21:38:52.916872   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d8b522dbe037ed89ec2b20e0d6c0f39d951d584bcc750131bf5cee0355d589"
	I0311 21:38:52.969778   70458 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:52.969807   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:53.365097   70458 logs.go:123] Gathering logs for container status ...
	I0311 21:38:53.365147   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:53.446533   70458 logs.go:123] Gathering logs for coredns [47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371] ...
	I0311 21:38:53.446576   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a3cc73ba85addba7f723d2e724ea4790eed03af3fdb8d64c0dace4bea55371"
	I0311 21:38:53.500017   70458 logs.go:123] Gathering logs for kube-controller-manager [349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c] ...
	I0311 21:38:53.500043   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 349dc13986ab3d471ce921879a3573c8844a92097ba91f2a63c19080c042044c"
	I0311 21:38:53.572904   70458 logs.go:123] Gathering logs for storage-provisioner [8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001] ...
	I0311 21:38:53.572954   70458 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c5aec8c42b972b685fcfad7a000019e9aeafa53d05ca555151a9120c74c3001"
	I0311 21:38:52.087848   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:52.108284   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:52.108351   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:52.161648   70908 cri.go:89] found id: ""
	I0311 21:38:52.161680   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.161691   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:52.161698   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:52.161763   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:52.206552   70908 cri.go:89] found id: ""
	I0311 21:38:52.206577   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.206588   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:52.206596   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:52.206659   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:52.253954   70908 cri.go:89] found id: ""
	I0311 21:38:52.253984   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.253996   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:52.254004   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:52.254068   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:52.302343   70908 cri.go:89] found id: ""
	I0311 21:38:52.302384   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.302396   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:52.302404   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:52.302472   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:52.345581   70908 cri.go:89] found id: ""
	I0311 21:38:52.345608   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.345618   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:52.345624   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:52.345683   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:52.392502   70908 cri.go:89] found id: ""
	I0311 21:38:52.392531   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.392542   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:52.392549   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:52.392601   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:52.447625   70908 cri.go:89] found id: ""
	I0311 21:38:52.447651   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.447661   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:52.447668   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:52.447728   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:52.490965   70908 cri.go:89] found id: ""
	I0311 21:38:52.490994   70908 logs.go:276] 0 containers: []
	W0311 21:38:52.491007   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:52.491019   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:52.491034   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:52.539604   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:52.539650   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:52.597735   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:52.597771   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:52.617572   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:52.617610   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:38:52.706724   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:52.706753   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:52.706769   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:55.293550   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:55.313904   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:38:55.314005   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:38:55.368607   70908 cri.go:89] found id: ""
	I0311 21:38:55.368639   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.368647   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:38:55.368654   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:38:55.368714   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:38:55.434052   70908 cri.go:89] found id: ""
	I0311 21:38:55.434081   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.434092   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:38:55.434100   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:38:55.434189   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:38:55.483532   70908 cri.go:89] found id: ""
	I0311 21:38:55.483562   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.483572   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:38:55.483579   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:38:55.483647   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:38:55.528681   70908 cri.go:89] found id: ""
	I0311 21:38:55.528708   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.528721   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:38:55.528728   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:38:55.528825   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:38:55.583143   70908 cri.go:89] found id: ""
	I0311 21:38:55.583167   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.583174   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:38:55.583179   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:38:55.583240   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:38:55.636577   70908 cri.go:89] found id: ""
	I0311 21:38:55.636599   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.636607   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:38:55.636612   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:38:55.636670   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:38:55.697268   70908 cri.go:89] found id: ""
	I0311 21:38:55.697295   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.697306   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:38:55.697314   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:38:55.697374   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:38:55.749272   70908 cri.go:89] found id: ""
	I0311 21:38:55.749302   70908 logs.go:276] 0 containers: []
	W0311 21:38:55.749312   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:38:55.749322   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:38:55.749335   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:38:55.841581   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:38:55.841643   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 21:38:55.898537   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:38:55.898574   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:38:55.973278   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:38:55.973329   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:38:55.992958   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:38:55.992986   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 21:38:56.137313   70458 system_pods.go:59] 8 kube-system pods found
	I0311 21:38:56.137347   70458 system_pods.go:61] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running
	I0311 21:38:56.137354   70458 system_pods.go:61] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running
	I0311 21:38:56.137361   70458 system_pods.go:61] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running
	I0311 21:38:56.137366   70458 system_pods.go:61] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running
	I0311 21:38:56.137371   70458 system_pods.go:61] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:38:56.137375   70458 system_pods.go:61] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running
	I0311 21:38:56.137383   70458 system_pods.go:61] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:38:56.137390   70458 system_pods.go:61] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:38:56.137400   70458 system_pods.go:74] duration metric: took 4.174175629s to wait for pod list to return data ...
	I0311 21:38:56.137409   70458 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:38:56.140315   70458 default_sa.go:45] found service account: "default"
	I0311 21:38:56.140344   70458 default_sa.go:55] duration metric: took 2.92722ms for default service account to be created ...
	I0311 21:38:56.140356   70458 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:38:56.146873   70458 system_pods.go:86] 8 kube-system pods found
	I0311 21:38:56.146912   70458 system_pods.go:89] "coredns-76f75df574-s6lsb" [b4f5daf9-7d52-475d-9341-09024dc7c8e7] Running
	I0311 21:38:56.146923   70458 system_pods.go:89] "etcd-no-preload-324578" [a1098b88-ea11-4745-9ddf-669111d1b201] Running
	I0311 21:38:56.146932   70458 system_pods.go:89] "kube-apiserver-no-preload-324578" [d48c7ad3-07fb-46d9-ae8c-e4f7afd58c86] Running
	I0311 21:38:56.146940   70458 system_pods.go:89] "kube-controller-manager-no-preload-324578" [1e921994-4c6c-4ab9-957d-c6ed12ce7a9e] Running
	I0311 21:38:56.146945   70458 system_pods.go:89] "kube-proxy-rmz4b" [81ec7a47-6b52-4133-bdc5-4dea57847900] Running
	I0311 21:38:56.146951   70458 system_pods.go:89] "kube-scheduler-no-preload-324578" [c59d63f7-28ab-4054-a9d0-c2b9bc2cc8e8] Running
	I0311 21:38:56.146960   70458 system_pods.go:89] "metrics-server-57f55c9bc5-nv4gd" [ae810c51-28bd-4c79-93ba-033f4767ba89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:38:56.146972   70458 system_pods.go:89] "storage-provisioner" [82fcc747-2962-4203-8ce5-25c2bb408a6d] Running
	I0311 21:38:56.146983   70458 system_pods.go:126] duration metric: took 6.619737ms to wait for k8s-apps to be running ...
	I0311 21:38:56.146998   70458 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:38:56.147056   70458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:38:56.165354   70458 system_svc.go:56] duration metric: took 18.346754ms WaitForService to wait for kubelet
	I0311 21:38:56.165387   70458 kubeadm.go:576] duration metric: took 4m22.570894549s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:38:56.165413   70458 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:38:56.168819   70458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:38:56.168845   70458 node_conditions.go:123] node cpu capacity is 2
	I0311 21:38:56.168856   70458 node_conditions.go:105] duration metric: took 3.437527ms to run NodePressure ...
	I0311 21:38:56.168868   70458 start.go:240] waiting for startup goroutines ...
	I0311 21:38:56.168875   70458 start.go:245] waiting for cluster config update ...
	I0311 21:38:56.168885   70458 start.go:254] writing updated cluster config ...
	I0311 21:38:56.169153   70458 ssh_runner.go:195] Run: rm -f paused
	I0311 21:38:56.225977   70458 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0311 21:38:56.228234   70458 out.go:177] * Done! kubectl is now configured to use "no-preload-324578" cluster and "default" namespace by default
	I0311 21:38:56.450729   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:38:58.450799   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	W0311 21:38:56.084193   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:38:58.584354   70908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:38:58.604767   70908 kubeadm.go:591] duration metric: took 4m4.440744932s to restartPrimaryControlPlane
	W0311 21:38:58.604844   70908 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:38:58.604872   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:38:59.965834   70908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.36094005s)
	I0311 21:38:59.965906   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:38:59.982020   70908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:38:59.994794   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:39:00.007116   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:39:00.007138   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:39:00.007182   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:39:00.019744   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:39:00.019802   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:39:00.033311   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:39:00.045608   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:39:00.045685   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:39:00.059722   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.071140   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:39:00.071199   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:39:00.082635   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:39:00.093311   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:39:00.093374   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:39:00.104995   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:39:00.372164   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:39:00.950799   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:03.450080   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:05.949899   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:07.950640   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:10.450583   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:12.949481   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:14.950496   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:16.951064   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:18.958165   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:21.450609   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:23.949791   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:26.302837   70604 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.161781704s)
	I0311 21:39:26.302921   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:39:26.319602   70604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:39:26.331483   70604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:39:26.343632   70604 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:39:26.343658   70604 kubeadm.go:156] found existing configuration files:
	
	I0311 21:39:26.343705   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:39:26.354863   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:39:26.354919   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:39:26.366087   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:39:26.377221   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:39:26.377282   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:39:26.389769   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:39:26.401201   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:39:26.401255   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:39:26.412357   70604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:39:26.423962   70604 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:39:26.424035   70604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:39:26.436189   70604 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:39:26.672030   70604 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:39:25.952857   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:28.449272   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:30.450630   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:32.450912   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:35.908605   70604 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 21:39:35.908656   70604 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:39:35.908751   70604 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:39:35.908846   70604 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:39:35.908967   70604 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:39:35.909026   70604 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:39:35.910690   70604 out.go:204]   - Generating certificates and keys ...
	I0311 21:39:35.910785   70604 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:39:35.910849   70604 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:39:35.910952   70604 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:39:35.911039   70604 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:39:35.911106   70604 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:39:35.911177   70604 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:39:35.911268   70604 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:39:35.911353   70604 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:39:35.911449   70604 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:39:35.911551   70604 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:39:35.911604   70604 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:39:35.911689   70604 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:39:35.911762   70604 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:39:35.911869   70604 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:39:35.911974   70604 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:39:35.912067   70604 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:39:35.912217   70604 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:39:35.912320   70604 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:39:35.914908   70604 out.go:204]   - Booting up control plane ...
	I0311 21:39:35.915026   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:39:35.915126   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:39:35.915216   70604 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:39:35.915321   70604 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:39:35.915431   70604 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:39:35.915487   70604 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:39:35.915659   70604 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:39:35.915792   70604 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503325 seconds
	I0311 21:39:35.915925   70604 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:39:35.916039   70604 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:39:35.916091   70604 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:39:35.916314   70604 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-743937 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:39:35.916408   70604 kubeadm.go:309] [bootstrap-token] Using token: hxeoeg.f2scq51qa57vwzwt
	I0311 21:39:35.917880   70604 out.go:204]   - Configuring RBAC rules ...
	I0311 21:39:35.917995   70604 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:39:35.918093   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:39:35.918297   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:39:35.918490   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:39:35.918629   70604 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:39:35.918745   70604 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:39:35.918907   70604 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:39:35.918974   70604 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:39:35.919031   70604 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:39:35.919048   70604 kubeadm.go:309] 
	I0311 21:39:35.919118   70604 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:39:35.919128   70604 kubeadm.go:309] 
	I0311 21:39:35.919225   70604 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:39:35.919236   70604 kubeadm.go:309] 
	I0311 21:39:35.919266   70604 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:39:35.919344   70604 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:39:35.919405   70604 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:39:35.919412   70604 kubeadm.go:309] 
	I0311 21:39:35.919461   70604 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:39:35.919467   70604 kubeadm.go:309] 
	I0311 21:39:35.919505   70604 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:39:35.919511   70604 kubeadm.go:309] 
	I0311 21:39:35.919553   70604 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:39:35.919640   70604 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:39:35.919727   70604 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:39:35.919736   70604 kubeadm.go:309] 
	I0311 21:39:35.919835   70604 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:39:35.919949   70604 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:39:35.919964   70604 kubeadm.go:309] 
	I0311 21:39:35.920071   70604 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token hxeoeg.f2scq51qa57vwzwt \
	I0311 21:39:35.920172   70604 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:39:35.920193   70604 kubeadm.go:309] 	--control-plane 
	I0311 21:39:35.920199   70604 kubeadm.go:309] 
	I0311 21:39:35.920271   70604 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:39:35.920280   70604 kubeadm.go:309] 
	I0311 21:39:35.920349   70604 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token hxeoeg.f2scq51qa57vwzwt \
	I0311 21:39:35.920479   70604 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:39:35.920507   70604 cni.go:84] Creating CNI manager for ""
	I0311 21:39:35.920517   70604 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:39:35.922125   70604 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:39:35.923386   70604 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:39:35.955828   70604 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
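The step above writes a bridge CNI config to /etc/cni/net.d/1-k8s.conflist (457 bytes). The file's contents are not reproduced in the log; for reference, a minimal bridge conflist of this kind typically looks like the sketch below (the plugin set and pod subnet are assumptions for illustration, not the exact file minikube wrote):

    # Illustrative only: a generic bridge CNI conflist; minikube's actual 1-k8s.conflist
    # is not shown verbatim in this log.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF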
	I0311 21:39:36.065309   70604 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:39:36.065389   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:36.065408   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-743937 minikube.k8s.io/updated_at=2024_03_11T21_39_36_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=embed-certs-743937 minikube.k8s.io/primary=true
	I0311 21:39:36.370945   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:36.370961   70604 ops.go:34] apiserver oom_adj: -16
	I0311 21:39:36.871194   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:37.371937   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:37.871974   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:38.371330   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:38.871791   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:34.949300   70417 pod_ready.go:102] pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace has status "Ready":"False"
	I0311 21:39:36.942990   70417 pod_ready.go:81] duration metric: took 4m0.000574155s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" ...
	E0311 21:39:36.943022   70417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-kxl6n" in "kube-system" namespace to be "Ready" (will not retry!)
	I0311 21:39:36.943043   70417 pod_ready.go:38] duration metric: took 4m12.043798271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:36.943093   70417 kubeadm.go:591] duration metric: took 4m20.121624644s to restartPrimaryControlPlane
	W0311 21:39:36.943155   70417 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 21:39:36.943183   70417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:39:39.371531   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:39.872032   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:40.371717   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:40.871615   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:41.371577   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:41.871841   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:42.371050   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:42.871044   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:43.371446   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:43.871815   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:44.371243   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:44.872056   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:45.371993   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:45.871213   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:46.371397   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:46.871185   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.371541   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.871121   70604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:39:47.971855   70604 kubeadm.go:1106] duration metric: took 11.906533451s to wait for elevateKubeSystemPrivileges
	W0311 21:39:47.971895   70604 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 21:39:47.971902   70604 kubeadm.go:393] duration metric: took 5m16.305518086s to StartCluster
	I0311 21:39:47.971917   70604 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:39:47.972003   70604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:39:47.974339   70604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:39:47.974576   70604 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:39:47.976309   70604 out.go:177] * Verifying Kubernetes components...
	I0311 21:39:47.974638   70604 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:39:47.974819   70604 config.go:182] Loaded profile config "embed-certs-743937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:39:47.977737   70604 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-743937"
	I0311 21:39:47.977746   70604 addons.go:69] Setting default-storageclass=true in profile "embed-certs-743937"
	I0311 21:39:47.977779   70604 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-743937"
	W0311 21:39:47.977790   70604 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:39:47.977815   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.977740   70604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:39:47.977779   70604 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-743937"
	I0311 21:39:47.977750   70604 addons.go:69] Setting metrics-server=true in profile "embed-certs-743937"
	I0311 21:39:47.977943   70604 addons.go:234] Setting addon metrics-server=true in "embed-certs-743937"
	W0311 21:39:47.977957   70604 addons.go:243] addon metrics-server should already be in state true
	I0311 21:39:47.977985   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.978241   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978241   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978270   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.978275   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.978419   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.978449   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.994019   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44139
	I0311 21:39:47.994131   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I0311 21:39:47.994484   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.994514   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.994964   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.994983   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.995128   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.995143   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.995288   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0311 21:39:47.995437   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.995506   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.995583   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:47.996051   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.996073   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.996516   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:47.996999   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:47.997024   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:47.997383   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:47.997834   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.997858   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:47.999381   70604 addons.go:234] Setting addon default-storageclass=true in "embed-certs-743937"
	W0311 21:39:47.999406   70604 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:39:47.999432   70604 host.go:66] Checking if "embed-certs-743937" exists ...
	I0311 21:39:47.999794   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:47.999823   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:48.012063   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41291
	I0311 21:39:48.012470   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.012899   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.012923   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.013267   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.013334   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0311 21:39:48.013484   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.013767   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.014259   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.014279   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.014556   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.014752   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.015486   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.017650   70604 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:39:48.016591   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.019717   70604 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:39:48.019736   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:39:48.019758   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.021823   70604 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:39:48.023083   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:39:48.023095   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:39:48.023108   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.023306   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.023589   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0311 21:39:48.023916   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.023937   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.024255   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.024412   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.024533   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.024653   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.025517   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.025955   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.025967   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.026292   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.027365   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.027654   70604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:39:48.027692   70604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:39:48.027909   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.027965   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.028188   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.028369   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.028496   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.028603   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.048933   70604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0311 21:39:48.049338   70604 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:39:48.049918   70604 main.go:141] libmachine: Using API Version  1
	I0311 21:39:48.049929   70604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:39:48.050342   70604 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:39:48.050502   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetState
	I0311 21:39:48.052274   70604 main.go:141] libmachine: (embed-certs-743937) Calling .DriverName
	I0311 21:39:48.052523   70604 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:39:48.052537   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:39:48.052554   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHHostname
	I0311 21:39:48.055438   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.055864   70604 main.go:141] libmachine: (embed-certs-743937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:b4:7a", ip: ""} in network mk-embed-certs-743937: {Iface:virbr2 ExpiryTime:2024-03-11 22:34:15 +0000 UTC Type:0 Mac:52:54:00:84:b4:7a Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-743937 Clientid:01:52:54:00:84:b4:7a}
	I0311 21:39:48.055881   70604 main.go:141] libmachine: (embed-certs-743937) DBG | domain embed-certs-743937 has defined IP address 192.168.50.114 and MAC address 52:54:00:84:b4:7a in network mk-embed-certs-743937
	I0311 21:39:48.056156   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHPort
	I0311 21:39:48.056334   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHKeyPath
	I0311 21:39:48.056495   70604 main.go:141] libmachine: (embed-certs-743937) Calling .GetSSHUsername
	I0311 21:39:48.056608   70604 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/embed-certs-743937/id_rsa Username:docker}
	I0311 21:39:48.175402   70604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:39:48.196199   70604 node_ready.go:35] waiting up to 6m0s for node "embed-certs-743937" to be "Ready" ...
	I0311 21:39:48.215911   70604 node_ready.go:49] node "embed-certs-743937" has status "Ready":"True"
	I0311 21:39:48.215935   70604 node_ready.go:38] duration metric: took 19.701474ms for node "embed-certs-743937" to be "Ready" ...
	I0311 21:39:48.215945   70604 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:39:48.223525   70604 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.228887   70604 pod_ready.go:92] pod "etcd-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.228907   70604 pod_ready.go:81] duration metric: took 5.35597ms for pod "etcd-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.228917   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.233811   70604 pod_ready.go:92] pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.233828   70604 pod_ready.go:81] duration metric: took 4.904721ms for pod "kube-apiserver-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.233839   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.241831   70604 pod_ready.go:92] pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.241848   70604 pod_ready.go:81] duration metric: took 8.002663ms for pod "kube-controller-manager-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.241857   70604 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.247609   70604 pod_ready.go:92] pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace has status "Ready":"True"
	I0311 21:39:48.247633   70604 pod_ready.go:81] duration metric: took 5.767693ms for pod "kube-scheduler-embed-certs-743937" in "kube-system" namespace to be "Ready" ...
	I0311 21:39:48.247641   70604 pod_ready.go:38] duration metric: took 31.680305ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
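The readiness loop above polls each system-critical pod by label until it reports "Ready". A rough manual equivalent with kubectl, assuming a kubeconfig pointed at this cluster (the label selectors come from the log line above; the 6m timeout mirrors the wait budget shown):

    # Approximate manual equivalent of the pod-readiness wait in the log.
    kubectl -n kube-system wait pod -l component=etcd --for=condition=Ready --timeout=6m
    kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=6m
    kubectl -n kube-system wait pod -l component=kube-controller-manager --for=condition=Ready --timeout=6m
    kubectl -n kube-system wait pod -l component=kube-scheduler --for=condition=Ready --timeout=6m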
	I0311 21:39:48.247656   70604 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:39:48.247704   70604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:39:48.270201   70604 api_server.go:72] duration metric: took 295.596568ms to wait for apiserver process to appear ...
	I0311 21:39:48.270224   70604 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:39:48.270242   70604 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0311 21:39:48.277642   70604 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0311 21:39:48.280487   70604 api_server.go:141] control plane version: v1.28.4
	I0311 21:39:48.280505   70604 api_server.go:131] duration metric: took 10.273204ms to wait for apiserver health ...
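The healthz probe above hits the apiserver directly at https://192.168.50.114:8443/healthz and treats a 200 with body "ok" as healthy. To reproduce it by hand from inside the guest (the CA path is an assumption based on the /var/lib/minikube/certs directory used earlier in this log):

    # Manual healthz check; -k skips TLS verification for a quick probe.
    curl -k https://192.168.50.114:8443/healthz            # expect: ok
    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.50.114:8443/healthz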
	I0311 21:39:48.280514   70604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:39:48.343718   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:39:48.346848   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:39:48.346864   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:39:48.400878   70604 system_pods.go:59] 4 kube-system pods found
	I0311 21:39:48.400907   70604 system_pods.go:61] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:48.400913   70604 system_pods.go:61] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:48.400919   70604 system_pods.go:61] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:48.400923   70604 system_pods.go:61] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:48.400931   70604 system_pods.go:74] duration metric: took 120.410888ms to wait for pod list to return data ...
	I0311 21:39:48.400940   70604 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:39:48.401062   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:39:48.401083   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:39:48.406115   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:39:48.492018   70604 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:39:48.492042   70604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:39:48.581187   70604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
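After the combined apply above, the metrics-server addon can be sanity-checked with the same in-guest kubectl invocation the log uses. The object names below (Deployment "metrics-server", APIService "v1beta1.metrics.k8s.io") are the conventional ones from upstream metrics-server manifests and are assumptions here, since the addon manifests themselves are not printed in this log:

    # Hypothetical follow-up checks for the metrics-server addon.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl -n kube-system get deploy metrics-server
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl get apiservice v1beta1.metrics.k8s.io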
	I0311 21:39:48.602016   70604 default_sa.go:45] found service account: "default"
	I0311 21:39:48.602046   70604 default_sa.go:55] duration metric: took 201.097662ms for default service account to be created ...
	I0311 21:39:48.602056   70604 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:39:48.862115   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:48.862148   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending
	I0311 21:39:48.862155   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending
	I0311 21:39:48.862159   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:48.862164   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:48.862169   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:48.862176   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:48.862180   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:48.862199   70604 retry.go:31] will retry after 266.08114ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.139648   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.139675   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.139682   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.139689   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.139694   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.139700   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.139706   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.139710   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.139724   70604 retry.go:31] will retry after 293.420416ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.476384   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.476411   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.476418   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.476423   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.476429   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.476433   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.476438   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.476442   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.476456   70604 retry.go:31] will retry after 439.10065ms: missing components: kube-dns, kube-proxy
	I0311 21:39:49.927298   70604 system_pods.go:86] 7 kube-system pods found
	I0311 21:39:49.927337   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.927348   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:49.927357   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:49.927366   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:49.927373   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:49.927381   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0311 21:39:49.927389   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:49.927411   70604 retry.go:31] will retry after 396.604462ms: missing components: kube-dns, kube-proxy
	I0311 21:39:50.092631   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.68647s)
	I0311 21:39:50.092698   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.092718   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093147   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093200   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093223   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093241   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093280   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.749522465s)
	I0311 21:39:50.093321   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093336   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.093507   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093529   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093746   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.093759   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.093773   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.093797   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.093805   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.094040   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.094041   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.094067   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.111807   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.111831   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.112109   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.112127   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.112132   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.291598   70604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.710367476s)
	I0311 21:39:50.291651   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.291671   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.292020   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.292036   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.292044   70604 main.go:141] libmachine: Making call to close driver server
	I0311 21:39:50.292050   70604 main.go:141] libmachine: (embed-certs-743937) Calling .Close
	I0311 21:39:50.292287   70604 main.go:141] libmachine: (embed-certs-743937) DBG | Closing plugin on server side
	I0311 21:39:50.292328   70604 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:39:50.292352   70604 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:39:50.292367   70604 addons.go:470] Verifying addon metrics-server=true in "embed-certs-743937"
	I0311 21:39:50.294192   70604 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0311 21:39:50.295405   70604 addons.go:505] duration metric: took 2.320766016s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0311 21:39:50.339623   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:50.339651   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:50.339658   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:50.339665   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:50.339671   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:50.339677   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:50.339682   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:50.339688   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:50.339695   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:50.339704   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:39:50.339728   70604 retry.go:31] will retry after 674.573171ms: missing components: kube-dns
	I0311 21:39:51.021666   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:51.021704   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.021716   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.021723   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:51.021731   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:51.021743   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:51.021754   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:51.021760   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:51.021772   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:51.021786   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0311 21:39:51.021805   70604 retry.go:31] will retry after 716.470399ms: missing components: kube-dns
	I0311 21:39:51.745786   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:51.745818   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.745829   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0311 21:39:51.745840   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:51.745849   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:51.745855   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:51.745861   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:51.745867   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:51.745876   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:51.745886   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Running
	I0311 21:39:51.745904   70604 retry.go:31] will retry after 873.920018ms: missing components: kube-dns
	I0311 21:39:52.627896   70604 system_pods.go:86] 9 kube-system pods found
	I0311 21:39:52.627922   70604 system_pods.go:89] "coredns-5dd5756b68-58ct4" [96fa2415-2468-4a6d-887f-5eb6e455bbea] Running
	I0311 21:39:52.627927   70604 system_pods.go:89] "coredns-5dd5756b68-hct77" [ca63b9a2-afdf-4dbf-93db-2a22a3fa2a31] Running
	I0311 21:39:52.627932   70604 system_pods.go:89] "etcd-embed-certs-743937" [72098f9e-045b-42b7-8934-1c49e1ae39fb] Running
	I0311 21:39:52.627936   70604 system_pods.go:89] "kube-apiserver-embed-certs-743937" [029a611a-f6db-498b-8039-f5f3641c08e4] Running
	I0311 21:39:52.627941   70604 system_pods.go:89] "kube-controller-manager-embed-certs-743937" [b1e40f46-078f-4d23-b3c0-0bff5f7cbd2a] Running
	I0311 21:39:52.627944   70604 system_pods.go:89] "kube-proxy-7xmlm" [f18fd74c-17fa-44f1-a7e4-ab19fffe497b] Running
	I0311 21:39:52.627948   70604 system_pods.go:89] "kube-scheduler-embed-certs-743937" [85ec5219-c94a-43fc-ad4b-039187cfa618] Running
	I0311 21:39:52.627954   70604 system_pods.go:89] "metrics-server-57f55c9bc5-9z7nz" [6a161d6c-584f-47ef-86f2-40e7870d372e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:39:52.627958   70604 system_pods.go:89] "storage-provisioner" [2096cbb5-d96f-48f5-a04a-eb596646c8ed] Running
	I0311 21:39:52.627966   70604 system_pods.go:126] duration metric: took 4.025903884s to wait for k8s-apps to be running ...
	I0311 21:39:52.627976   70604 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:39:52.628017   70604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:39:52.643356   70604 system_svc.go:56] duration metric: took 15.371853ms WaitForService to wait for kubelet
	I0311 21:39:52.643378   70604 kubeadm.go:576] duration metric: took 4.668777182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:39:52.643394   70604 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:39:52.646844   70604 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:39:52.646862   70604 node_conditions.go:123] node cpu capacity is 2
	I0311 21:39:52.646871   70604 node_conditions.go:105] duration metric: took 3.47245ms to run NodePressure ...
	I0311 21:39:52.646881   70604 start.go:240] waiting for startup goroutines ...
	I0311 21:39:52.646891   70604 start.go:245] waiting for cluster config update ...
	I0311 21:39:52.646904   70604 start.go:254] writing updated cluster config ...
	I0311 21:39:52.647207   70604 ssh_runner.go:195] Run: rm -f paused
	I0311 21:39:52.697687   70604 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:39:52.699641   70604 out.go:177] * Done! kubectl is now configured to use "embed-certs-743937" cluster and "default" namespace by default
	I0311 21:40:09.411155   70417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.467938624s)
	I0311 21:40:09.411245   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:09.429951   70417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 21:40:09.442265   70417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:40:09.453883   70417 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:40:09.453899   70417 kubeadm.go:156] found existing configuration files:
	
	I0311 21:40:09.453934   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0311 21:40:09.465106   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:40:09.465161   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:40:09.476155   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0311 21:40:09.487366   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:40:09.487413   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:40:09.497877   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0311 21:40:09.508056   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:40:09.508096   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:40:09.518709   70417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0311 21:40:09.529005   70417 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:40:09.529039   70417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:40:09.539755   70417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:40:09.601265   70417 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 21:40:09.601399   70417 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:09.771387   70417 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:09.771548   70417 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:09.771653   70417 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:10.016610   70417 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:10.018526   70417 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:10.018613   70417 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:10.018670   70417 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:10.018752   70417 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:10.018830   70417 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:10.018926   70417 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:10.019019   70417 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:10.019436   70417 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:10.019924   70417 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:10.020435   70417 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:10.020949   70417 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:10.021470   70417 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:10.021550   70417 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:10.087827   70417 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:10.326702   70417 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:10.515476   70417 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:10.585573   70417 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:10.586277   70417 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:10.588784   70417 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:10.590786   70417 out.go:204]   - Booting up control plane ...
	I0311 21:40:10.590969   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:10.591080   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:10.591164   70417 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:10.613086   70417 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:10.613187   70417 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:10.613224   70417 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:10.753737   70417 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:40:17.258016   70417 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503151 seconds
	I0311 21:40:17.258170   70417 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 21:40:17.276142   70417 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 21:40:17.805116   70417 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 21:40:17.805383   70417 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-766430 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 21:40:18.323836   70417 kubeadm.go:309] [bootstrap-token] Using token: 9sjslg.sf5b1bfk3wp77z35
	I0311 21:40:18.325382   70417 out.go:204]   - Configuring RBAC rules ...
	I0311 21:40:18.325478   70417 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 21:40:18.331585   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 21:40:18.344341   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 21:40:18.348362   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 21:40:18.352181   70417 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 21:40:18.363299   70417 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 21:40:18.377835   70417 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 21:40:18.612013   70417 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 21:40:18.755215   70417 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 21:40:18.755235   70417 kubeadm.go:309] 
	I0311 21:40:18.755300   70417 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 21:40:18.755314   70417 kubeadm.go:309] 
	I0311 21:40:18.755434   70417 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 21:40:18.755460   70417 kubeadm.go:309] 
	I0311 21:40:18.755490   70417 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 21:40:18.755571   70417 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 21:40:18.755636   70417 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 21:40:18.755647   70417 kubeadm.go:309] 
	I0311 21:40:18.755721   70417 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 21:40:18.755731   70417 kubeadm.go:309] 
	I0311 21:40:18.755794   70417 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 21:40:18.755804   70417 kubeadm.go:309] 
	I0311 21:40:18.755876   70417 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 21:40:18.755941   70417 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 21:40:18.756010   70417 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 21:40:18.756029   70417 kubeadm.go:309] 
	I0311 21:40:18.756152   70417 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 21:40:18.756267   70417 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 21:40:18.756277   70417 kubeadm.go:309] 
	I0311 21:40:18.756391   70417 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 9sjslg.sf5b1bfk3wp77z35 \
	I0311 21:40:18.756533   70417 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 \
	I0311 21:40:18.756578   70417 kubeadm.go:309] 	--control-plane 
	I0311 21:40:18.756585   70417 kubeadm.go:309] 
	I0311 21:40:18.756695   70417 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 21:40:18.756706   70417 kubeadm.go:309] 
	I0311 21:40:18.756844   70417 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 9sjslg.sf5b1bfk3wp77z35 \
	I0311 21:40:18.757021   70417 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7ba5dad12dadf0b6d45bebf6fac6fab21abfca6ae59dadd247cba23d24291054 
	I0311 21:40:18.759444   70417 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:40:18.759474   70417 cni.go:84] Creating CNI manager for ""
	I0311 21:40:18.759489   70417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 21:40:18.761354   70417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 21:40:18.762676   70417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 21:40:18.793496   70417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0311 21:40:18.840426   70417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 21:40:18.840508   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:18.840508   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-766430 minikube.k8s.io/updated_at=2024_03_11T21_40_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=default-k8s-diff-port-766430 minikube.k8s.io/primary=true
	I0311 21:40:19.150012   70417 ops.go:34] apiserver oom_adj: -16
	I0311 21:40:19.150129   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:19.650947   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:20.150969   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:20.650687   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:21.150849   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:21.650356   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:22.150737   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:22.650225   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:23.150390   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:23.650650   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:24.151081   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:24.650689   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:25.150428   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:25.650265   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:26.150198   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:26.650610   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:27.150325   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:27.650794   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:28.150855   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:28.650819   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:29.150345   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:29.650746   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.150910   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.650742   70417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 21:40:30.790472   70417 kubeadm.go:1106] duration metric: took 11.95003413s to wait for elevateKubeSystemPrivileges
	W0311 21:40:30.790506   70417 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 21:40:30.790513   70417 kubeadm.go:393] duration metric: took 5m14.024392605s to StartCluster
	I0311 21:40:30.790527   70417 settings.go:142] acquiring lock: {Name:mkde2ab58ea887bdcb7cca21c8835296dd79af4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:40:30.790630   70417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:40:30.792582   70417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-11004/kubeconfig: {Name:mkd372d3af5034d3070c99d4cf3436fe481d34f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 21:40:30.792843   70417 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.11 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0311 21:40:30.794425   70417 out.go:177] * Verifying Kubernetes components...
	I0311 21:40:30.792920   70417 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 21:40:30.793051   70417 config.go:182] Loaded profile config "default-k8s-diff-port-766430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:40:30.796119   70417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 21:40:30.796129   70417 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796160   70417 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.796171   70417 addons.go:243] addon metrics-server should already be in state true
	I0311 21:40:30.796197   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.796121   70417 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796127   70417 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-766430"
	I0311 21:40:30.796237   70417 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.796253   70417 addons.go:243] addon storage-provisioner should already be in state true
	I0311 21:40:30.796268   70417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-766430"
	I0311 21:40:30.796278   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.796663   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796694   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796699   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.796722   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.796777   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.796807   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.812156   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0311 21:40:30.812601   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.813108   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.813138   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.813532   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.813995   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.814031   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.816427   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I0311 21:40:30.816626   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0311 21:40:30.816863   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.817015   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.817365   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.817385   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.817532   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.817557   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.817905   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.817908   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.818696   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.819070   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.819100   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.822839   70417 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-766430"
	W0311 21:40:30.822858   70417 addons.go:243] addon default-storageclass should already be in state true
	I0311 21:40:30.822885   70417 host.go:66] Checking if "default-k8s-diff-port-766430" exists ...
	I0311 21:40:30.823188   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.823202   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.834007   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I0311 21:40:30.834521   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.835017   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.835033   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.835418   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.835620   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.837838   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.839548   70417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0311 21:40:30.838397   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I0311 21:40:30.840244   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0311 21:40:30.840869   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 21:40:30.840885   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 21:40:30.840904   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.841295   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.841345   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.841877   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.841894   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.841994   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.842012   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.842246   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.842414   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.842448   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.842960   70417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 21:40:30.842985   70417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 21:40:30.844184   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.844406   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.845769   70417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 21:40:30.847105   70417 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:40:30.844838   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.847124   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 21:40:30.847142   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.845110   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.847151   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.847302   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.847424   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.847550   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:30.849856   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.850205   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.850232   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.850414   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.850575   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.850697   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.850835   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:30.861464   70417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0311 21:40:30.861799   70417 main.go:141] libmachine: () Calling .GetVersion
	I0311 21:40:30.862252   70417 main.go:141] libmachine: Using API Version  1
	I0311 21:40:30.862271   70417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 21:40:30.862655   70417 main.go:141] libmachine: () Calling .GetMachineName
	I0311 21:40:30.862818   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetState
	I0311 21:40:30.864692   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .DriverName
	I0311 21:40:30.864956   70417 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 21:40:30.864978   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 21:40:30.864996   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHHostname
	I0311 21:40:30.867548   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.867980   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:8d", ip: ""} in network mk-default-k8s-diff-port-766430: {Iface:virbr4 ExpiryTime:2024-03-11 22:34:58 +0000 UTC Type:0 Mac:52:54:00:41:07:8d Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:default-k8s-diff-port-766430 Clientid:01:52:54:00:41:07:8d}
	I0311 21:40:30.868013   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | domain default-k8s-diff-port-766430 has defined IP address 192.168.61.11 and MAC address 52:54:00:41:07:8d in network mk-default-k8s-diff-port-766430
	I0311 21:40:30.868140   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHPort
	I0311 21:40:30.868300   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHKeyPath
	I0311 21:40:30.868433   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .GetSSHUsername
	I0311 21:40:30.868558   70417 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/default-k8s-diff-port-766430/id_rsa Username:docker}
	I0311 21:40:31.037958   70417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 21:40:31.081173   70417 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-766430" to be "Ready" ...
	I0311 21:40:31.103697   70417 node_ready.go:49] node "default-k8s-diff-port-766430" has status "Ready":"True"
	I0311 21:40:31.103717   70417 node_ready.go:38] duration metric: took 22.519334ms for node "default-k8s-diff-port-766430" to be "Ready" ...
	I0311 21:40:31.103726   70417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:40:31.129595   70417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:31.184749   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 21:40:31.184771   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0311 21:40:31.194340   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 21:40:31.213567   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 21:40:31.213589   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 21:40:31.255647   70417 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:40:31.255667   70417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 21:40:31.284917   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 21:40:31.309356   70417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 21:40:32.792293   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.597920266s)
	I0311 21:40:32.792337   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.792351   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.792625   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.792686   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.792703   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:32.792714   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.792724   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.793060   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.793086   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:32.793137   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.811230   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:32.811254   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:32.811583   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:32.811587   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:32.811606   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.156126   70417 pod_ready.go:92] pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.156148   70417 pod_ready.go:81] duration metric: took 2.026531002s for pod "coredns-5dd5756b68-kxjhf" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.156156   70417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.174226   70417 pod_ready.go:92] pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.174248   70417 pod_ready.go:81] duration metric: took 18.0858ms for pod "coredns-5dd5756b68-qdcdw" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.174257   70417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.186296   70417 pod_ready.go:92] pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.186329   70417 pod_ready.go:81] duration metric: took 12.06396ms for pod "etcd-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.186344   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.195902   70417 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.195930   70417 pod_ready.go:81] duration metric: took 9.577334ms for pod "kube-apiserver-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.195945   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.203134   70417 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.203160   70417 pod_ready.go:81] duration metric: took 7.205172ms for pod "kube-controller-manager-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.203174   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t4fwc" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.449290   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.164324973s)
	I0311 21:40:33.449341   70417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.139948099s)
	I0311 21:40:33.449374   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449392   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449346   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449461   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449662   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449678   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449688   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449697   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449751   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.449795   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449810   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449823   70417 main.go:141] libmachine: Making call to close driver server
	I0311 21:40:33.449836   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) Calling .Close
	I0311 21:40:33.449886   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.449905   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.449926   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.450213   70417 main.go:141] libmachine: (default-k8s-diff-port-766430) DBG | Closing plugin on server side
	I0311 21:40:33.450256   70417 main.go:141] libmachine: Successfully made call to close driver server
	I0311 21:40:33.450263   70417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0311 21:40:33.450272   70417 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-766430"
	I0311 21:40:33.453444   70417 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0311 21:40:33.454670   70417 addons.go:505] duration metric: took 2.661756652s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0311 21:40:33.534893   70417 pod_ready.go:92] pod "kube-proxy-t4fwc" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.534915   70417 pod_ready.go:81] duration metric: took 331.733613ms for pod "kube-proxy-t4fwc" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.534924   70417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.933950   70417 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace has status "Ready":"True"
	I0311 21:40:33.933973   70417 pod_ready.go:81] duration metric: took 399.042085ms for pod "kube-scheduler-default-k8s-diff-port-766430" in "kube-system" namespace to be "Ready" ...
	I0311 21:40:33.933981   70417 pod_ready.go:38] duration metric: took 2.830245804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 21:40:33.933994   70417 api_server.go:52] waiting for apiserver process to appear ...
	I0311 21:40:33.934053   70417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 21:40:33.953607   70417 api_server.go:72] duration metric: took 3.160728268s to wait for apiserver process to appear ...
	I0311 21:40:33.953629   70417 api_server.go:88] waiting for apiserver healthz status ...
	I0311 21:40:33.953650   70417 api_server.go:253] Checking apiserver healthz at https://192.168.61.11:8444/healthz ...
	I0311 21:40:33.959064   70417 api_server.go:279] https://192.168.61.11:8444/healthz returned 200:
	ok
	I0311 21:40:33.960101   70417 api_server.go:141] control plane version: v1.28.4
	I0311 21:40:33.960125   70417 api_server.go:131] duration metric: took 6.489682ms to wait for apiserver health ...
	I0311 21:40:33.960135   70417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 21:40:34.137026   70417 system_pods.go:59] 9 kube-system pods found
	I0311 21:40:34.137061   70417 system_pods.go:61] "coredns-5dd5756b68-kxjhf" [09678270-80f4-4bde-8080-3a3a41ecb356] Running
	I0311 21:40:34.137079   70417 system_pods.go:61] "coredns-5dd5756b68-qdcdw" [9f100559-2b0a-4068-a3e7-475b5865a1d9] Running
	I0311 21:40:34.137086   70417 system_pods.go:61] "etcd-default-k8s-diff-port-766430" [c09576c7-db47-4ce1-a8cb-d67926c413fe] Running
	I0311 21:40:34.137093   70417 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-766430" [f74a16b9-5e73-450f-bc62-c2e501a15ae2] Running
	I0311 21:40:34.137100   70417 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-766430" [abf4c5ea-4770-49a5-8480-dc9276663588] Running
	I0311 21:40:34.137105   70417 system_pods.go:61] "kube-proxy-t4fwc" [2b82ae7c-bffe-4fe4-b38c-3a789654df85] Running
	I0311 21:40:34.137111   70417 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-766430" [b1a26b37-7480-4f5c-bd99-785facd8b315] Running
	I0311 21:40:34.137121   70417 system_pods.go:61] "metrics-server-57f55c9bc5-9slpq" [ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:40:34.137133   70417 system_pods.go:61] "storage-provisioner" [d1d4992a-803a-4064-b372-6ba9729bd2ef] Running
	I0311 21:40:34.137147   70417 system_pods.go:74] duration metric: took 177.004603ms to wait for pod list to return data ...
	I0311 21:40:34.137201   70417 default_sa.go:34] waiting for default service account to be created ...
	I0311 21:40:34.333563   70417 default_sa.go:45] found service account: "default"
	I0311 21:40:34.333589   70417 default_sa.go:55] duration metric: took 196.374123ms for default service account to be created ...
	I0311 21:40:34.333600   70417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 21:40:34.537376   70417 system_pods.go:86] 9 kube-system pods found
	I0311 21:40:34.537401   70417 system_pods.go:89] "coredns-5dd5756b68-kxjhf" [09678270-80f4-4bde-8080-3a3a41ecb356] Running
	I0311 21:40:34.537406   70417 system_pods.go:89] "coredns-5dd5756b68-qdcdw" [9f100559-2b0a-4068-a3e7-475b5865a1d9] Running
	I0311 21:40:34.537411   70417 system_pods.go:89] "etcd-default-k8s-diff-port-766430" [c09576c7-db47-4ce1-a8cb-d67926c413fe] Running
	I0311 21:40:34.537415   70417 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-766430" [f74a16b9-5e73-450f-bc62-c2e501a15ae2] Running
	I0311 21:40:34.537420   70417 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-766430" [abf4c5ea-4770-49a5-8480-dc9276663588] Running
	I0311 21:40:34.537423   70417 system_pods.go:89] "kube-proxy-t4fwc" [2b82ae7c-bffe-4fe4-b38c-3a789654df85] Running
	I0311 21:40:34.537427   70417 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-766430" [b1a26b37-7480-4f5c-bd99-785facd8b315] Running
	I0311 21:40:34.537433   70417 system_pods.go:89] "metrics-server-57f55c9bc5-9slpq" [ac6d8f9f-7bb4-4a50-8fd9-ca5e5dc0fc18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 21:40:34.537438   70417 system_pods.go:89] "storage-provisioner" [d1d4992a-803a-4064-b372-6ba9729bd2ef] Running
	I0311 21:40:34.537447   70417 system_pods.go:126] duration metric: took 203.840784ms to wait for k8s-apps to be running ...
	I0311 21:40:34.537453   70417 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 21:40:34.537493   70417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:34.555483   70417 system_svc.go:56] duration metric: took 18.021595ms WaitForService to wait for kubelet
	I0311 21:40:34.555511   70417 kubeadm.go:576] duration metric: took 3.76263503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 21:40:34.555534   70417 node_conditions.go:102] verifying NodePressure condition ...
	I0311 21:40:34.735214   70417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 21:40:34.735238   70417 node_conditions.go:123] node cpu capacity is 2
	I0311 21:40:34.735248   70417 node_conditions.go:105] duration metric: took 179.707447ms to run NodePressure ...
	I0311 21:40:34.735258   70417 start.go:240] waiting for startup goroutines ...
	I0311 21:40:34.735264   70417 start.go:245] waiting for cluster config update ...
	I0311 21:40:34.735274   70417 start.go:254] writing updated cluster config ...
	I0311 21:40:34.735539   70417 ssh_runner.go:195] Run: rm -f paused
	I0311 21:40:34.782710   70417 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 21:40:34.784627   70417 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-766430" cluster and "default" namespace by default
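The healthz wait logged by api_server.go above is simply a GET against the apiserver's /healthz endpoint on the profile's forwarded port (8444 for default-k8s-diff-port). A minimal manual equivalent, assuming the kubeconfig context written by the "Done!" step above:

	kubectl --context default-k8s-diff-port-766430 get --raw /healthz
	# a healthy control plane answers with:
	# ok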
	I0311 21:40:56.380462   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:40:56.380539   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:40:56.382217   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:56.382264   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:56.382349   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:56.382450   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:56.382619   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:56.382712   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:56.384498   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:56.384579   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:56.384636   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:56.384766   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:56.384863   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:56.384967   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:56.385037   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:56.385139   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:56.385208   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:56.385281   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:56.385357   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:56.385408   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:56.385492   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:56.385567   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:56.385644   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:56.385769   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:56.385855   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:56.385962   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:56.386053   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:56.386104   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:56.386184   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:56.387594   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:56.387671   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:56.387738   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:56.387811   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:56.387914   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:56.388107   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:40:56.388182   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:40:56.388297   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388522   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388614   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.388844   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.388914   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389074   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389131   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389314   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389405   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:40:56.389594   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:40:56.389603   70908 kubeadm.go:309] 
	I0311 21:40:56.389653   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:40:56.389720   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:40:56.389732   70908 kubeadm.go:309] 
	I0311 21:40:56.389779   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:40:56.389811   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:40:56.389924   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:40:56.389933   70908 kubeadm.go:309] 
	I0311 21:40:56.390058   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:40:56.390109   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:40:56.390150   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:40:56.390159   70908 kubeadm.go:309] 
	I0311 21:40:56.390299   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:40:56.390395   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:40:56.390409   70908 kubeadm.go:309] 
	I0311 21:40:56.390512   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:40:56.390603   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:40:56.390702   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:40:56.390803   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:40:56.390833   70908 kubeadm.go:309] 
	W0311 21:40:56.390936   70908 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0311 21:40:56.390995   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0311 21:40:56.941058   70908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 21:40:56.958276   70908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 21:40:56.970464   70908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 21:40:56.970493   70908 kubeadm.go:156] found existing configuration files:
	
	I0311 21:40:56.970552   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 21:40:56.983314   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 21:40:56.983372   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 21:40:56.993791   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 21:40:57.004040   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 21:40:57.004098   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 21:40:57.014471   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.024751   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 21:40:57.024805   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 21:40:57.035389   70908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 21:40:57.045511   70908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 21:40:57.045556   70908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 21:40:57.056774   70908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 21:40:57.140620   70908 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0311 21:40:57.140789   70908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 21:40:57.310076   70908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 21:40:57.310193   70908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 21:40:57.310280   70908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 21:40:57.506834   70908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 21:40:57.509261   70908 out.go:204]   - Generating certificates and keys ...
	I0311 21:40:57.509362   70908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 21:40:57.509446   70908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 21:40:57.509576   70908 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 21:40:57.509669   70908 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 21:40:57.509765   70908 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 21:40:57.509839   70908 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 21:40:57.509949   70908 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 21:40:57.510004   70908 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 21:40:57.510109   70908 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 21:40:57.510231   70908 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 21:40:57.510274   70908 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 21:40:57.510361   70908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 21:40:57.585562   70908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 21:40:57.644460   70908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 21:40:57.784382   70908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 21:40:57.848952   70908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 21:40:57.867302   70908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 21:40:57.867791   70908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 21:40:57.867864   70908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 21:40:58.036523   70908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 21:40:58.039051   70908 out.go:204]   - Booting up control plane ...
	I0311 21:40:58.039176   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 21:40:58.054234   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 21:40:58.055548   70908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 21:40:58.057378   70908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 21:40:58.060167   70908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 21:41:38.062360   70908 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0311 21:41:38.062886   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:38.063137   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:43.063592   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:43.063788   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:41:53.064505   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:41:53.064773   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:13.065744   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:13.065995   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.066718   70908 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0311 21:42:53.067030   70908 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0311 21:42:53.067070   70908 kubeadm.go:309] 
	I0311 21:42:53.067135   70908 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0311 21:42:53.067191   70908 kubeadm.go:309] 		timed out waiting for the condition
	I0311 21:42:53.067203   70908 kubeadm.go:309] 
	I0311 21:42:53.067259   70908 kubeadm.go:309] 	This error is likely caused by:
	I0311 21:42:53.067318   70908 kubeadm.go:309] 		- The kubelet is not running
	I0311 21:42:53.067456   70908 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0311 21:42:53.067466   70908 kubeadm.go:309] 
	I0311 21:42:53.067590   70908 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0311 21:42:53.067650   70908 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0311 21:42:53.067724   70908 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0311 21:42:53.067735   70908 kubeadm.go:309] 
	I0311 21:42:53.067889   70908 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0311 21:42:53.068021   70908 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0311 21:42:53.068036   70908 kubeadm.go:309] 
	I0311 21:42:53.068169   70908 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0311 21:42:53.068297   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0311 21:42:53.068412   70908 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0311 21:42:53.068512   70908 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0311 21:42:53.068523   70908 kubeadm.go:309] 
	I0311 21:42:53.069455   70908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 21:42:53.069572   70908 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0311 21:42:53.069682   70908 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0311 21:42:53.069775   70908 kubeadm.go:393] duration metric: took 7m58.960224884s to StartCluster
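The container listing that follows is minikube running the same crictl check the kubeadm output above recommends. Done by hand it would look roughly like this (a sketch, assuming the CRI-O socket path from the log and the old-k8s-version-239315 profile these logs belong to):

	minikube ssh -p old-k8s-version-239315
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# if a control-plane container is listed as Exited, inspect it:
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID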
	I0311 21:42:53.069833   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0311 21:42:53.069899   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0311 21:42:53.120459   70908 cri.go:89] found id: ""
	I0311 21:42:53.120486   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.120497   70908 logs.go:278] No container was found matching "kube-apiserver"
	I0311 21:42:53.120505   70908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0311 21:42:53.120564   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0311 21:42:53.159639   70908 cri.go:89] found id: ""
	I0311 21:42:53.159667   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.159676   70908 logs.go:278] No container was found matching "etcd"
	I0311 21:42:53.159682   70908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0311 21:42:53.159738   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0311 21:42:53.199584   70908 cri.go:89] found id: ""
	I0311 21:42:53.199607   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.199614   70908 logs.go:278] No container was found matching "coredns"
	I0311 21:42:53.199619   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0311 21:42:53.199676   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0311 21:42:53.238868   70908 cri.go:89] found id: ""
	I0311 21:42:53.238901   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.238908   70908 logs.go:278] No container was found matching "kube-scheduler"
	I0311 21:42:53.238917   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0311 21:42:53.238963   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0311 21:42:53.282172   70908 cri.go:89] found id: ""
	I0311 21:42:53.282205   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.282216   70908 logs.go:278] No container was found matching "kube-proxy"
	I0311 21:42:53.282225   70908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0311 21:42:53.282278   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0311 21:42:53.318450   70908 cri.go:89] found id: ""
	I0311 21:42:53.318481   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.318491   70908 logs.go:278] No container was found matching "kube-controller-manager"
	I0311 21:42:53.318499   70908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0311 21:42:53.318559   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0311 21:42:53.360887   70908 cri.go:89] found id: ""
	I0311 21:42:53.360913   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.360923   70908 logs.go:278] No container was found matching "kindnet"
	I0311 21:42:53.360930   70908 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0311 21:42:53.361027   70908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0311 21:42:53.414181   70908 cri.go:89] found id: ""
	I0311 21:42:53.414209   70908 logs.go:276] 0 containers: []
	W0311 21:42:53.414220   70908 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0311 21:42:53.414232   70908 logs.go:123] Gathering logs for kubelet ...
	I0311 21:42:53.414247   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 21:42:53.478658   70908 logs.go:123] Gathering logs for dmesg ...
	I0311 21:42:53.478689   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 21:42:53.494577   70908 logs.go:123] Gathering logs for describe nodes ...
	I0311 21:42:53.494604   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0311 21:42:53.586460   70908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0311 21:42:53.586483   70908 logs.go:123] Gathering logs for CRI-O ...
	I0311 21:42:53.586500   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0311 21:42:53.697218   70908 logs.go:123] Gathering logs for container status ...
	I0311 21:42:53.697251   70908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0311 21:42:53.746291   70908 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0311 21:42:53.746336   70908 out.go:239] * 
	W0311 21:42:53.746388   70908 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.746409   70908 out.go:239] * 
	W0311 21:42:53.747362   70908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 21:42:53.750888   70908 out.go:177] 
	W0311 21:42:53.752146   70908 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0311 21:42:53.752211   70908 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0311 21:42:53.752239   70908 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0311 21:42:53.753832   70908 out.go:177] 
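The K8S_KUBELET_NOT_RUNNING suggestion above points at a kubelet/CRI-O cgroup-driver mismatch as a likely cause. A quick way to compare the two settings on the node, plus the suggested retry (a sketch; /var/lib/kubelet/config.yaml is the path from the kubelet-start step above, while the CRI-O config locations are the stock defaults and may differ):

	# cgroup manager CRI-O is configured with
	sudo grep -R cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	# cgroup driver the kubelet was started with
	sudo grep cgroupDriver /var/lib/kubelet/config.yaml
	# retry the profile with the kubelet driver forced to systemd, as suggested
	minikube start -p old-k8s-version-239315 --extra-config=kubelet.cgroup-driver=systemd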
	
	
	==> CRI-O <==
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.007552525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194007007533091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb0571cc-9f47-4bcb-94b9-65c0d1bf54b9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.008130572Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a889be1f-3b6b-4ac5-975b-21a2d04241cb name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.008216703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a889be1f-3b6b-4ac5-975b-21a2d04241cb name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.008252742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a889be1f-3b6b-4ac5-975b-21a2d04241cb name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.048323374Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc1714f6-da72-46ce-82d2-4871e436a927 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.048427722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc1714f6-da72-46ce-82d2-4871e436a927 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.050241818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94c0f3d8-a05d-4b88-aeb7-a2da5a4a5a28 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.050768041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194007050647716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94c0f3d8-a05d-4b88-aeb7-a2da5a4a5a28 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.051361468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcf0fa46-9578-4fe7-ad90-11fe3786a519 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.051450423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcf0fa46-9578-4fe7-ad90-11fe3786a519 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.051492302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dcf0fa46-9578-4fe7-ad90-11fe3786a519 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.086795462Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ccc1e55b-4f12-4631-849a-09b787fa0d38 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.086891143Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ccc1e55b-4f12-4631-849a-09b787fa0d38 name=/runtime.v1.RuntimeService/Version
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.088132983Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cab1a4bb-c178-4a1e-b104-ecf3083e901e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.088522462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194007088495545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cab1a4bb-c178-4a1e-b104-ecf3083e901e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.089194267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=081d389f-a8d5-4383-8571-e2f78353c7db name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.089242487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=081d389f-a8d5-4383-8571-e2f78353c7db name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.089272635Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=081d389f-a8d5-4383-8571-e2f78353c7db name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.126303080Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f26da11-139d-4ff3-957d-f6915c97803f name=/runtime.v1.RuntimeService/Version
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.126384796Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f26da11-139d-4ff3-957d-f6915c97803f name=/runtime.v1.RuntimeService/Version
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.128224919Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06816b96-b807-4e26-9e53-bc4c3354556e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.128667853Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710194007128645612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06816b96-b807-4e26-9e53-bc4c3354556e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.129425255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80f0c784-4905-4fbb-81e3-6a7ef131ed91 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.129478984Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80f0c784-4905-4fbb-81e3-6a7ef131ed91 name=/runtime.v1.RuntimeService/ListContainers
	Mar 11 21:53:27 old-k8s-version-239315 crio[648]: time="2024-03-11 21:53:27.129509483Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=80f0c784-4905-4fbb-81e3-6a7ef131ed91 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar11 21:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053511] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047458] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.912778] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.895538] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.801193] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.918843] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.060085] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078339] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.210226] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.161588] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.299563] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +7.096564] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.072356] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.134589] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[Mar11 21:35] kauditd_printk_skb: 46 callbacks suppressed
	[Mar11 21:39] systemd-fstab-generator[4995]: Ignoring "noauto" option for root device
	[Mar11 21:40] systemd-fstab-generator[5275]: Ignoring "noauto" option for root device
	[  +0.073343] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:53:27 up 18 min,  0 users,  load average: 0.04, 0.04, 0.05
	Linux old-k8s-version-239315 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0005fc960, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc00084d1d0, 0x24, 0x0, ...)
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]: net.(*Dialer).DialContext(0xc000b9d740, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00084d1d0, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000bacce0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00084d1d0, 0x24, 0x60, 0x7f438c57ed10, 0x118, ...)
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]: net/http.(*Transport).dial(0xc00067d2c0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00084d1d0, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]: net/http.(*Transport).dialConn(0xc00067d2c0, 0x4f7fe00, 0xc000052030, 0x0, 0xc000b283c0, 0x5, 0xc00084d1d0, 0x24, 0x0, 0xc000840120, ...)
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]: net/http.(*Transport).dialConnFor(0xc00067d2c0, 0xc000ac71e0)
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]: created by net/http.(*Transport).queueForDial
	Mar 11 21:53:24 old-k8s-version-239315 kubelet[6709]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 11 21:53:24 old-k8s-version-239315 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 11 21:53:24 old-k8s-version-239315 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 11 21:53:25 old-k8s-version-239315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 129.
	Mar 11 21:53:25 old-k8s-version-239315 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 11 21:53:25 old-k8s-version-239315 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 11 21:53:25 old-k8s-version-239315 kubelet[6719]: I0311 21:53:25.361641    6719 server.go:416] Version: v1.20.0
	Mar 11 21:53:25 old-k8s-version-239315 kubelet[6719]: I0311 21:53:25.362034    6719 server.go:837] Client rotation is on, will bootstrap in background
	Mar 11 21:53:25 old-k8s-version-239315 kubelet[6719]: I0311 21:53:25.363988    6719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 11 21:53:25 old-k8s-version-239315 kubelet[6719]: I0311 21:53:25.365385    6719 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Mar 11 21:53:25 old-k8s-version-239315 kubelet[6719]: W0311 21:53:25.365549    6719 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239315 -n old-k8s-version-239315
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239315 -n old-k8s-version-239315: exit status 2 (278.305997ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-239315" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (88.13s)
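Note: the kubelet crash loop and the "Cannot detect current cgroup on cgroup v2" warning in the log above line up with the suggestion printed in the failure output. A minimal sketch of that suggested follow-up (command and flag quoted from the suggestion; profile name taken from this test; journalctl is run on the affected node) would be:

	# Inspect the kubelet unit logs on the node, as suggested in the failure output
	journalctl -xeu kubelet
	# Retry the start with the suggested kubelet cgroup-driver override
	out/minikube-linux-amd64 start -p old-k8s-version-239315 --extra-config=kubelet.cgroup-driver=systemd

The override presumably aligns the kubelet's cgroup driver with the systemd driver in use on the node; see the related issue linked in the failure output.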

                                                
                                    

Test pass (249/319)
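(Of the 319 tests in this run, 249 passed; the remaining 319 - 249 = 70 either failed or were skipped.)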

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.44
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.28.4/json-events 4.7
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.29.0-rc.2/json-events 4.56
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.56
31 TestOffline 124.5
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 146.36
38 TestAddons/parallel/Registry 14.13
40 TestAddons/parallel/InspektorGadget 12.4
41 TestAddons/parallel/MetricsServer 7.09
42 TestAddons/parallel/HelmTiller 10.69
44 TestAddons/parallel/CSI 61.63
45 TestAddons/parallel/Headlamp 13.64
46 TestAddons/parallel/CloudSpanner 5.59
47 TestAddons/parallel/LocalPath 52.35
48 TestAddons/parallel/NvidiaDevicePlugin 6.55
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
54 TestCertOptions 48.24
55 TestCertExpiration 287.03
57 TestForceSystemdFlag 98.78
58 TestForceSystemdEnv 49.33
60 TestKVMDriverInstallOrUpdate 1.2
64 TestErrorSpam/setup 46.33
65 TestErrorSpam/start 0.37
66 TestErrorSpam/status 0.74
67 TestErrorSpam/pause 1.62
68 TestErrorSpam/unpause 1.84
69 TestErrorSpam/stop 5.27
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 59.91
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 37.59
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
81 TestFunctional/serial/CacheCmd/cache/add_local 1.07
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 39.79
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 1.66
92 TestFunctional/serial/LogsFileCmd 1.59
93 TestFunctional/serial/InvalidService 4.37
95 TestFunctional/parallel/ConfigCmd 0.41
96 TestFunctional/parallel/DashboardCmd 12
97 TestFunctional/parallel/DryRun 0.48
98 TestFunctional/parallel/InternationalLanguage 0.15
99 TestFunctional/parallel/StatusCmd 1.41
103 TestFunctional/parallel/ServiceCmdConnect 10.68
104 TestFunctional/parallel/AddonsCmd 0.18
105 TestFunctional/parallel/PersistentVolumeClaim 40.8
107 TestFunctional/parallel/SSHCmd 0.53
108 TestFunctional/parallel/CpCmd 1.54
109 TestFunctional/parallel/MySQL 29.14
110 TestFunctional/parallel/FileSync 0.27
111 TestFunctional/parallel/CertSync 1.59
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
119 TestFunctional/parallel/License 0.2
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
123 TestFunctional/parallel/Version/short 0.06
124 TestFunctional/parallel/Version/components 1.27
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.8
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
129 TestFunctional/parallel/ImageCommands/ImageBuild 3.66
130 TestFunctional/parallel/ImageCommands/Setup 1
131 TestFunctional/parallel/ServiceCmd/DeployApp 11.27
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.36
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.68
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.52
144 TestFunctional/parallel/ServiceCmd/List 0.38
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
147 TestFunctional/parallel/ServiceCmd/Format 0.42
148 TestFunctional/parallel/ServiceCmd/URL 0.5
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
150 TestFunctional/parallel/MountCmd/any-port 23.76
151 TestFunctional/parallel/ProfileCmd/profile_list 0.39
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.74
154 TestFunctional/parallel/ImageCommands/ImageRemove 1
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.66
157 TestFunctional/parallel/MountCmd/specific-port 2.13
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMutliControlPlane/serial/StartCluster 243.19
166 TestMutliControlPlane/serial/DeployApp 5.42
167 TestMutliControlPlane/serial/PingHostFromPods 1.39
168 TestMutliControlPlane/serial/AddWorkerNode 48.85
169 TestMutliControlPlane/serial/NodeLabels 0.07
170 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.56
171 TestMutliControlPlane/serial/CopyFile 13.48
173 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.5
175 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
177 TestMutliControlPlane/serial/DeleteSecondaryNode 17.42
178 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
180 TestMutliControlPlane/serial/RestartCluster 317.26
181 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.39
182 TestMutliControlPlane/serial/AddSecondaryNode 79.01
183 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
187 TestJSONOutput/start/Command 101.97
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.76
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.66
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.36
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.21
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 92.59
219 TestMountStart/serial/StartWithMountFirst 28.31
220 TestMountStart/serial/VerifyMountFirst 0.39
221 TestMountStart/serial/StartWithMountSecond 26.5
222 TestMountStart/serial/VerifyMountSecond 0.39
223 TestMountStart/serial/DeleteFirst 0.89
224 TestMountStart/serial/VerifyMountPostDelete 0.39
225 TestMountStart/serial/Stop 1.42
226 TestMountStart/serial/RestartStopped 23.14
227 TestMountStart/serial/VerifyMountPostStop 0.4
230 TestMultiNode/serial/FreshStart2Nodes 102.19
231 TestMultiNode/serial/DeployApp2Nodes 4.18
232 TestMultiNode/serial/PingHostFrom2Pods 0.88
233 TestMultiNode/serial/AddNode 41.04
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.24
236 TestMultiNode/serial/CopyFile 7.46
237 TestMultiNode/serial/StopNode 3.17
238 TestMultiNode/serial/StartAfterStop 28.95
240 TestMultiNode/serial/DeleteNode 2.54
242 TestMultiNode/serial/RestartMultiNode 170.04
243 TestMultiNode/serial/ValidateNameConflict 47.45
250 TestScheduledStopUnix 117.04
254 TestRunningBinaryUpgrade 149.8
265 TestNetworkPlugins/group/false 3.2
276 TestStoppedBinaryUpgrade/Setup 0.5
277 TestStoppedBinaryUpgrade/Upgrade 193.54
278 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
280 TestPause/serial/Start 83.79
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
283 TestNoKubernetes/serial/StartWithK8s 73.54
285 TestNoKubernetes/serial/StartWithStopK8s 6.75
286 TestNoKubernetes/serial/Start 27.26
287 TestNetworkPlugins/group/auto/Start 119.71
288 TestNetworkPlugins/group/kindnet/Start 98.33
289 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
290 TestNoKubernetes/serial/ProfileList 3.03
291 TestNoKubernetes/serial/Stop 2.58
292 TestNoKubernetes/serial/StartNoArgs 71.5
293 TestNetworkPlugins/group/calico/Start 161.97
294 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
295 TestNetworkPlugins/group/custom-flannel/Start 107.54
296 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
297 TestNetworkPlugins/group/auto/KubeletFlags 0.21
298 TestNetworkPlugins/group/auto/NetCatPod 12.24
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
300 TestNetworkPlugins/group/kindnet/NetCatPod 13.34
301 TestNetworkPlugins/group/auto/DNS 0.22
302 TestNetworkPlugins/group/auto/Localhost 0.18
303 TestNetworkPlugins/group/auto/HairPin 0.16
304 TestNetworkPlugins/group/kindnet/DNS 0.22
305 TestNetworkPlugins/group/kindnet/Localhost 0.16
306 TestNetworkPlugins/group/kindnet/HairPin 0.16
307 TestNetworkPlugins/group/enable-default-cni/Start 112.07
308 TestNetworkPlugins/group/flannel/Start 109.83
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.23
311 TestNetworkPlugins/group/calico/NetCatPod 11.29
312 TestNetworkPlugins/group/calico/DNS 0.19
313 TestNetworkPlugins/group/calico/Localhost 0.16
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
315 TestNetworkPlugins/group/calico/HairPin 0.16
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.25
317 TestNetworkPlugins/group/custom-flannel/DNS 0.34
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
320 TestNetworkPlugins/group/bridge/Start 97.1
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.52
325 TestNetworkPlugins/group/flannel/ControllerPod 6.01
326 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
327 TestNetworkPlugins/group/flannel/NetCatPod 11.26
328 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
329 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
330 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
331 TestNetworkPlugins/group/flannel/DNS 0.2
332 TestNetworkPlugins/group/flannel/Localhost 0.19
333 TestNetworkPlugins/group/flannel/HairPin 0.18
335 TestStartStop/group/no-preload/serial/FirstStart 122.1
337 TestStartStop/group/embed-certs/serial/FirstStart 129.27
338 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
339 TestNetworkPlugins/group/bridge/NetCatPod 13.24
340 TestNetworkPlugins/group/bridge/DNS 0.18
341 TestNetworkPlugins/group/bridge/Localhost 0.13
342 TestNetworkPlugins/group/bridge/HairPin 0.17
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 62.19
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.33
346 TestStartStop/group/no-preload/serial/DeployApp 9.32
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
351 TestStartStop/group/embed-certs/serial/DeployApp 8.29
352 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
358 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 690.74
359 TestStartStop/group/no-preload/serial/SecondStart 591.2
361 TestStartStop/group/embed-certs/serial/SecondStart 634.05
362 TestStartStop/group/old-k8s-version/serial/Stop 3.3
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
374 TestStartStop/group/newest-cni/serial/FirstStart 58.29
375 TestStartStop/group/newest-cni/serial/DeployApp 0
376 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.3
377 TestStartStop/group/newest-cni/serial/Stop 10.66
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
379 TestStartStop/group/newest-cni/serial/SecondStart 38.38
380 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
383 TestStartStop/group/newest-cni/serial/Pause 2.65
x
+
TestDownloadOnly/v1.20.0/json-events (9.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-462238 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-462238 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.434934946s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.44s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-462238
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-462238: exit status 85 (72.37572ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-462238 | jenkins | v1.32.0 | 11 Mar 24 20:09 UTC |          |
	|         | -p download-only-462238        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 20:09:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 20:09:51.924031   18247 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:09:51.924250   18247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:09:51.924258   18247 out.go:304] Setting ErrFile to fd 2...
	I0311 20:09:51.924262   18247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:09:51.924437   18247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	W0311 20:09:51.924545   18247 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18358-11004/.minikube/config/config.json: open /home/jenkins/minikube-integration/18358-11004/.minikube/config/config.json: no such file or directory
	I0311 20:09:51.925127   18247 out.go:298] Setting JSON to true
	I0311 20:09:51.925985   18247 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3141,"bootTime":1710184651,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 20:09:51.926046   18247 start.go:139] virtualization: kvm guest
	I0311 20:09:51.928634   18247 out.go:97] [download-only-462238] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 20:09:51.930058   18247 out.go:169] MINIKUBE_LOCATION=18358
	W0311 20:09:51.928782   18247 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball: no such file or directory
	I0311 20:09:51.928827   18247 notify.go:220] Checking for updates...
	I0311 20:09:51.932722   18247 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 20:09:51.934071   18247 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:09:51.935311   18247 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:09:51.936643   18247 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0311 20:09:51.939150   18247 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 20:09:51.939335   18247 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 20:09:52.035934   18247 out.go:97] Using the kvm2 driver based on user configuration
	I0311 20:09:52.035966   18247 start.go:297] selected driver: kvm2
	I0311 20:09:52.035978   18247 start.go:901] validating driver "kvm2" against <nil>
	I0311 20:09:52.036307   18247 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:09:52.036438   18247 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18358-11004/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0311 20:09:52.050723   18247 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0311 20:09:52.050775   18247 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 20:09:52.051237   18247 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0311 20:09:52.051388   18247 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 20:09:52.051418   18247 cni.go:84] Creating CNI manager for ""
	I0311 20:09:52.051428   18247 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0311 20:09:52.051436   18247 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 20:09:52.051489   18247 start.go:340] cluster config:
	{Name:download-only-462238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-462238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:09:52.051663   18247 iso.go:125] acquiring lock: {Name:mk01c594acb315ed9710288d0fe2c40356bbd08e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 20:09:52.053648   18247 out.go:97] Downloading VM boot image ...
	I0311 20:09:52.053677   18247 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0311 20:09:54.697048   18247 out.go:97] Starting "download-only-462238" primary control-plane node in "download-only-462238" cluster
	I0311 20:09:54.697088   18247 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 20:09:54.722485   18247 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0311 20:09:54.722506   18247 cache.go:56] Caching tarball of preloaded images
	I0311 20:09:54.722660   18247 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0311 20:09:54.724404   18247 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0311 20:09:54.724418   18247 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0311 20:09:54.747482   18247 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-462238 host does not exist
	  To start a cluster, run: "minikube start -p download-only-462238"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
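As a quick check of what the download-only run above cached (paths taken from the log; the cache root follows MINIKUBE_HOME on this agent), one could list:

	# Boot ISO downloaded during the start step
	ls /home/jenkins/minikube-integration/18358-11004/.minikube/cache/iso/amd64/
	# Preloaded images tarball that the preload-exists subtest checks for
	ls /home/jenkins/minikube-integration/18358-11004/.minikube/cache/preloaded-tarball/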

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-462238
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (4.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-924667 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-924667 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.698030318s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.70s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-924667
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-924667: exit status 85 (67.675545ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-462238 | jenkins | v1.32.0 | 11 Mar 24 20:09 UTC |                     |
	|         | -p download-only-462238        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC | 11 Mar 24 20:10 UTC |
	| delete  | -p download-only-462238        | download-only-462238 | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC | 11 Mar 24 20:10 UTC |
	| start   | -o=json --download-only        | download-only-924667 | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC |                     |
	|         | -p download-only-924667        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 20:10:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 20:10:01.697937   18418 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:10:01.698176   18418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:10:01.698186   18418 out.go:304] Setting ErrFile to fd 2...
	I0311 20:10:01.698190   18418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:10:01.698419   18418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:10:01.698994   18418 out.go:298] Setting JSON to true
	I0311 20:10:01.699817   18418 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3151,"bootTime":1710184651,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 20:10:01.699873   18418 start.go:139] virtualization: kvm guest
	I0311 20:10:01.702137   18418 out.go:97] [download-only-924667] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 20:10:01.703869   18418 out.go:169] MINIKUBE_LOCATION=18358
	I0311 20:10:01.702288   18418 notify.go:220] Checking for updates...
	I0311 20:10:01.706898   18418 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 20:10:01.708334   18418 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:10:01.709682   18418 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:10:01.711097   18418 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-924667 host does not exist
	  To start a cluster, run: "minikube start -p download-only-924667"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-924667
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (4.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-991647 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-991647 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.564179657s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (4.56s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-991647
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-991647: exit status 85 (68.750114ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-462238 | jenkins | v1.32.0 | 11 Mar 24 20:09 UTC |                     |
	|         | -p download-only-462238           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC | 11 Mar 24 20:10 UTC |
	| delete  | -p download-only-462238           | download-only-462238 | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC | 11 Mar 24 20:10 UTC |
	| start   | -o=json --download-only           | download-only-924667 | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC |                     |
	|         | -p download-only-924667           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC | 11 Mar 24 20:10 UTC |
	| delete  | -p download-only-924667           | download-only-924667 | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC | 11 Mar 24 20:10 UTC |
	| start   | -o=json --download-only           | download-only-991647 | jenkins | v1.32.0 | 11 Mar 24 20:10 UTC |                     |
	|         | -p download-only-991647           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 20:10:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 20:10:06.724573   18574 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:10:06.724841   18574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:10:06.724853   18574 out.go:304] Setting ErrFile to fd 2...
	I0311 20:10:06.724858   18574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:10:06.725012   18574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:10:06.725569   18574 out.go:298] Setting JSON to true
	I0311 20:10:06.726412   18574 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3156,"bootTime":1710184651,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 20:10:06.726470   18574 start.go:139] virtualization: kvm guest
	I0311 20:10:06.728533   18574 out.go:97] [download-only-991647] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 20:10:06.730154   18574 out.go:169] MINIKUBE_LOCATION=18358
	I0311 20:10:06.728759   18574 notify.go:220] Checking for updates...
	I0311 20:10:06.733209   18574 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 20:10:06.734791   18574 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:10:06.736260   18574 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:10:06.737612   18574 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-991647 host does not exist
	  To start a cluster, run: "minikube start -p download-only-991647"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-991647
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-409325 --alsologtostderr --binary-mirror http://127.0.0.1:43807 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-409325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-409325
--- PASS: TestBinaryMirror (0.56s)

                                                
                                    
x
+
TestOffline (124.5s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-153995 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-153995 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m3.653992439s)
helpers_test.go:175: Cleaning up "offline-crio-153995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-153995
--- PASS: TestOffline (124.50s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-118179
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-118179: exit status 85 (59.410328ms)

                                                
                                                
-- stdout --
	* Profile "addons-118179" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-118179"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-118179
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-118179: exit status 85 (60.425381ms)
-- stdout --
	* Profile "addons-118179" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-118179"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (146.36s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-118179 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-118179 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.358760472s)
--- PASS: TestAddons/Setup (146.36s)

TestAddons/parallel/Registry (14.13s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 28.223284ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9xb76" [3903cf06-c0ac-4d15-a746-05339675f06d] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006007646s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6lhvc" [4094d1fb-0775-4dc1-b7b3-22fbe462ee70] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00634456s
addons_test.go:340: (dbg) Run:  kubectl --context addons-118179 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-118179 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-118179 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.897210476s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-118179 ip
2024/03/11 20:12:52 [DEBUG] GET http://192.168.39.50:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-118179 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.13s)

TestAddons/parallel/InspektorGadget (12.4s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tdk2d" [441c4adf-c363-4cc2-8675-4d129c2b6931] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005630499s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-118179
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-118179: (6.398377556s)
--- PASS: TestAddons/parallel/InspektorGadget (12.40s)

TestAddons/parallel/MetricsServer (7.09s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 27.572709ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-rngft" [2972db78-e263-4e81-ae94-b595ca23332c] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007307451s
addons_test.go:415: (dbg) Run:  kubectl --context addons-118179 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-118179 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.09s)

TestAddons/parallel/HelmTiller (10.69s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.728876ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-zqbdm" [ade214cb-6f64-48e5-bcbb-916b4343fd3b] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005772382s
addons_test.go:473: (dbg) Run:  kubectl --context addons-118179 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-118179 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.002915222s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-118179 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.69s)

TestAddons/parallel/CSI (61.63s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 28.598134ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-118179 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-118179 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6d5ff8b6-65b3-4726-b69d-2a39c6d6fe56] Pending
helpers_test.go:344: "task-pv-pod" [6d5ff8b6-65b3-4726-b69d-2a39c6d6fe56] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6d5ff8b6-65b3-4726-b69d-2a39c6d6fe56] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004826904s
addons_test.go:584: (dbg) Run:  kubectl --context addons-118179 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-118179 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-118179 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-118179 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-118179 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-118179 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-118179 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [973e453f-5137-458c-81a2-0d8630bb0942] Pending
helpers_test.go:344: "task-pv-pod-restore" [973e453f-5137-458c-81a2-0d8630bb0942] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [973e453f-5137-458c-81a2-0d8630bb0942] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.004342413s
addons_test.go:626: (dbg) Run:  kubectl --context addons-118179 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-118179 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-118179 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-118179 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-118179 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.813234734s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-118179 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (61.63s)

TestAddons/parallel/Headlamp (13.64s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-118179 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-118179 --alsologtostderr -v=1: (1.631607715s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-9hb9p" [67df60ae-ad11-4767-b3eb-ccfceb9799a9] Pending
helpers_test.go:344: "headlamp-5485c556b-9hb9p" [67df60ae-ad11-4767-b3eb-ccfceb9799a9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-9hb9p" [67df60ae-ad11-4767-b3eb-ccfceb9799a9] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.008002071s
--- PASS: TestAddons/parallel/Headlamp (13.64s)

TestAddons/parallel/CloudSpanner (5.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-6nhdf" [1bfa237a-fe0d-459d-8a3d-198e73bc62c6] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004542963s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-118179
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

TestAddons/parallel/LocalPath (52.35s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-118179 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-118179 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118179 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [91b1336f-5d6c-42f0-8231-ca84a07e7db4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [91b1336f-5d6c-42f0-8231-ca84a07e7db4] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [91b1336f-5d6c-42f0-8231-ca84a07e7db4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004926423s
addons_test.go:891: (dbg) Run:  kubectl --context addons-118179 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-118179 ssh "cat /opt/local-path-provisioner/pvc-3907be86-6656-46a8-8487-459ee24b4993_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-118179 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-118179 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-118179 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-118179 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.49456266s)
--- PASS: TestAddons/parallel/LocalPath (52.35s)

TestAddons/parallel/NvidiaDevicePlugin (6.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fkxwj" [43b433a3-3b3c-4cf4-a6c9-f11d6986e1a2] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005117264s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-118179
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.55s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-ttwqt" [cb8f70d5-1cf3-44eb-89cf-5d530bebda0d] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.010191397s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-118179 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-118179 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestCertOptions (48.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-406431 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0311 21:16:41.854004   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 21:16:58.808077   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-406431 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (46.729399369s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-406431 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-406431 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-406431 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-406431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-406431
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-406431: (1.015366598s)
--- PASS: TestCertOptions (48.24s)

TestCertExpiration (287.03s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-228186 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-228186 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (53.146360343s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-228186 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-228186 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (52.561419664s)
helpers_test.go:175: Cleaning up "cert-expiration-228186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-228186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-228186: (1.32588247s)
--- PASS: TestCertExpiration (287.03s)

TestForceSystemdFlag (98.78s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-193340 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-193340 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m37.567886535s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-193340 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-193340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-193340
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-193340: (1.009938548s)
--- PASS: TestForceSystemdFlag (98.78s)

TestForceSystemdEnv (49.33s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-922319 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-922319 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.511489752s)
helpers_test.go:175: Cleaning up "force-systemd-env-922319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-922319
--- PASS: TestForceSystemdEnv (49.33s)

TestKVMDriverInstallOrUpdate (1.2s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.20s)

TestErrorSpam/setup (46.33s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-602613 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-602613 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-602613 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-602613 --driver=kvm2  --container-runtime=crio: (46.331214933s)
--- PASS: TestErrorSpam/setup (46.33s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.74s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (1.62s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 pause
--- PASS: TestErrorSpam/pause (1.62s)

TestErrorSpam/unpause (1.84s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

TestErrorSpam/stop (5.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 stop: (2.300631868s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 stop: (1.649637876s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-602613 --log_dir /tmp/nospam-602613 stop: (1.323835011s)
--- PASS: TestErrorSpam/stop (5.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18358-11004/.minikube/files/etc/test/nested/copy/18235/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.91s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244607 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-244607 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (59.913147452s)
--- PASS: TestFunctional/serial/StartWithProxy (59.91s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.59s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244607 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-244607 --alsologtostderr -v=8: (37.592963649s)
functional_test.go:659: soft start took 37.593718051s for "functional-244607" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.59s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-244607 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 cache add registry.k8s.io/pause:3.1: (1.112921805s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 cache add registry.k8s.io/pause:3.3: (1.265081534s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 cache add registry.k8s.io/pause:latest: (1.070040624s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-244607 /tmp/TestFunctionalserialCacheCmdcacheadd_local2929923851/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 cache add minikube-local-cache-test:functional-244607
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 cache delete minikube-local-cache-test:functional-244607
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-244607
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244607 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (226.694289ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 kubectl -- --context functional-244607 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-244607 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (39.79s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244607 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-244607 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.784614296s)
functional_test.go:757: restart took 39.784723525s for "functional-244607" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.79s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-244607 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 logs: (1.655551822s)
--- PASS: TestFunctional/serial/LogsCmd (1.66s)

TestFunctional/serial/LogsFileCmd (1.59s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 logs --file /tmp/TestFunctionalserialLogsFileCmd1620915943/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 logs --file /tmp/TestFunctionalserialLogsFileCmd1620915943/001/logs.txt: (1.591644304s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.59s)

TestFunctional/serial/InvalidService (4.37s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-244607 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-244607
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-244607: exit status 115 (280.792563ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.51:30905 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-244607 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244607 config get cpus: exit status 14 (57.512793ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244607 config get cpus: exit status 14 (60.60029ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

TestFunctional/parallel/DashboardCmd (12s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-244607 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-244607 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 26581: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.00s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244607 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-244607 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (152.862152ms)
-- stdout --
	* [functional-244607] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0311 20:22:31.442918   26453 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:22:31.443048   26453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:22:31.443054   26453 out.go:304] Setting ErrFile to fd 2...
	I0311 20:22:31.443060   26453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:22:31.443487   26453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:22:31.444606   26453 out.go:298] Setting JSON to false
	I0311 20:22:31.445870   26453 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3900,"bootTime":1710184651,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 20:22:31.445955   26453 start.go:139] virtualization: kvm guest
	I0311 20:22:31.448309   26453 out.go:177] * [functional-244607] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 20:22:31.449844   26453 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 20:22:31.449870   26453 notify.go:220] Checking for updates...
	I0311 20:22:31.451446   26453 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 20:22:31.453069   26453 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:22:31.454276   26453 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:22:31.455626   26453 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 20:22:31.457401   26453 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 20:22:31.459012   26453 config.go:182] Loaded profile config "functional-244607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:22:31.459376   26453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:22:31.459431   26453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:22:31.473961   26453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34145
	I0311 20:22:31.474428   26453 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:22:31.474996   26453 main.go:141] libmachine: Using API Version  1
	I0311 20:22:31.475014   26453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:22:31.475413   26453 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:22:31.475622   26453 main.go:141] libmachine: (functional-244607) Calling .DriverName
	I0311 20:22:31.475846   26453 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 20:22:31.476157   26453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:22:31.476196   26453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:22:31.489868   26453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41489
	I0311 20:22:31.490194   26453 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:22:31.490618   26453 main.go:141] libmachine: Using API Version  1
	I0311 20:22:31.490646   26453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:22:31.490938   26453 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:22:31.491093   26453 main.go:141] libmachine: (functional-244607) Calling .DriverName
	I0311 20:22:31.521538   26453 out.go:177] * Using the kvm2 driver based on existing profile
	I0311 20:22:31.522676   26453 start.go:297] selected driver: kvm2
	I0311 20:22:31.522702   26453 start.go:901] validating driver "kvm2" against &{Name:functional-244607 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-244607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:22:31.522849   26453 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 20:22:31.525286   26453 out.go:177] 
	W0311 20:22:31.526524   26453 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0311 20:22:31.527811   26453 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244607 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.48s)
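The dry-run above exercises minikube's pre-flight memory validation: any --memory request below the usable minimum (1800MB in this run) makes start exit non-zero before the VM is touched. A minimal reproduction sketch against the same profile, using only flags that appear in this log (the exit code and the minimum may differ across minikube versions):

	# Deliberately undersized memory request; --dry-run stops after validation.
	out/minikube-linux-amd64 start -p functional-244607 --dry-run --memory 250MB \
	  --driver=kvm2 --container-runtime=crio
	echo "exit status: $?"   # 23 with reason RSRC_INSUFFICIENT_REQ_MEMORY in this run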

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244607 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-244607 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (149.843818ms)

-- stdout --
	* [functional-244607] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0311 20:22:31.919670   26507 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:22:31.919810   26507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:22:31.919822   26507 out.go:304] Setting ErrFile to fd 2...
	I0311 20:22:31.919829   26507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:22:31.920211   26507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:22:31.920854   26507 out.go:298] Setting JSON to false
	I0311 20:22:31.922028   26507 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3901,"bootTime":1710184651,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 20:22:31.922123   26507 start.go:139] virtualization: kvm guest
	I0311 20:22:31.923917   26507 out.go:177] * [functional-244607] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0311 20:22:31.925573   26507 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 20:22:31.925644   26507 notify.go:220] Checking for updates...
	I0311 20:22:31.926840   26507 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 20:22:31.928101   26507 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 20:22:31.929295   26507 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 20:22:31.930871   26507 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 20:22:31.932458   26507 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 20:22:31.934410   26507 config.go:182] Loaded profile config "functional-244607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:22:31.934779   26507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:22:31.934844   26507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:22:31.949799   26507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0311 20:22:31.950232   26507 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:22:31.950876   26507 main.go:141] libmachine: Using API Version  1
	I0311 20:22:31.950891   26507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:22:31.951254   26507 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:22:31.951475   26507 main.go:141] libmachine: (functional-244607) Calling .DriverName
	I0311 20:22:31.951776   26507 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 20:22:31.952204   26507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:22:31.952253   26507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:22:31.966868   26507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43405
	I0311 20:22:31.967210   26507 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:22:31.967696   26507 main.go:141] libmachine: Using API Version  1
	I0311 20:22:31.967724   26507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:22:31.968049   26507 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:22:31.968279   26507 main.go:141] libmachine: (functional-244607) Calling .DriverName
	I0311 20:22:31.999926   26507 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0311 20:22:32.001350   26507 start.go:297] selected driver: kvm2
	I0311 20:22:32.001376   26507 start.go:901] validating driver "kvm2" against &{Name:functional-244607 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-244607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 20:22:32.001499   26507 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 20:22:32.004048   26507 out.go:177] 
	W0311 20:22:32.005468   26507 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0311 20:22:32.006834   26507 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (1.41s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.41s)

TestFunctional/parallel/ServiceCmdConnect (10.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-244607 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-244607 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-pp42r" [9942019c-f464-4a27-a0cb-137d4643fe2c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-pp42r" [9942019c-f464-4a27-a0cb-137d4643fe2c] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004267752s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.51:31601
functional_test.go:1671: http://192.168.39.51:31601: success! body:

Hostname: hello-node-connect-55497b8b78-pp42r

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.51:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.51:31601
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.68s)
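The connectivity check above can be replayed by hand; a sketch using the same deployment, image, and commands recorded in this log (the NodePort, 31601 here, is assigned dynamically, and curl is an assumed stand-in for the test's HTTP fetch):

	kubectl --context functional-244607 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-244607 expose deployment hello-node-connect --type=NodePort --port=8080
	# Resolve the NodePort URL through minikube, then fetch it; echoserver echoes the request details back.
	URL=$(out/minikube-linux-amd64 -p functional-244607 service hello-node-connect --url)
	curl -s "$URL"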

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (40.8s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d28c23cf-014e-46b0-b127-d624fca95147] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005979164s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-244607 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-244607 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-244607 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-244607 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-244607 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [af0dc43a-c5b3-4003-aaf0-97d5e125fab1] Pending
helpers_test.go:344: "sp-pod" [af0dc43a-c5b3-4003-aaf0-97d5e125fab1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [af0dc43a-c5b3-4003-aaf0-97d5e125fab1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004551444s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-244607 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-244607 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-244607 delete -f testdata/storage-provisioner/pod.yaml: (2.267962359s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-244607 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d984c592-e0b2-41df-b2c7-5133b7ff4edb] Pending
helpers_test.go:344: "sp-pod" [d984c592-e0b2-41df-b2c7-5133b7ff4edb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d984c592-e0b2-41df-b2c7-5133b7ff4edb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.034769877s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-244607 exec sp-pod -- ls /tmp/mount
E0311 20:22:40.215733   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.80s)
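Condensed, the sequence above verifies that data written through the PVC survives pod replacement: write from one pod, delete it, then read the same path from a fresh pod. A sketch using the manifests and commands named in this log (paths are relative to the minikube test tree; the kubectl wait calls are an assumed convenience for the readiness polling the test performs):

	kubectl --context functional-244607 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-244607 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-244607 wait --for=condition=Ready pod/sp-pod --timeout=180s
	kubectl --context functional-244607 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-244607 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-244607 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-244607 wait --for=condition=Ready pod/sp-pod --timeout=180s
	# The file created by the first pod should still be visible from the replacement pod.
	kubectl --context functional-244607 exec sp-pod -- ls /tmp/mount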

TestFunctional/parallel/SSHCmd (0.53s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (1.54s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh -n functional-244607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 cp functional-244607:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4068720308/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh -n functional-244607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh -n functional-244607 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.54s)

TestFunctional/parallel/MySQL (29.14s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-244607 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-sttwv" [8eed4f9b-aec1-4f37-a7b4-f1279a52db3d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-sttwv" [8eed4f9b-aec1-4f37-a7b4-f1279a52db3d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.00734524s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-244607 exec mysql-859648c796-sttwv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-244607 exec mysql-859648c796-sttwv -- mysql -ppassword -e "show databases;": exit status 1 (297.416958ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-244607 exec mysql-859648c796-sttwv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-244607 exec mysql-859648c796-sttwv -- mysql -ppassword -e "show databases;": exit status 1 (208.904906ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-244607 exec mysql-859648c796-sttwv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-244607 exec mysql-859648c796-sttwv -- mysql -ppassword -e "show databases;": exit status 1 (307.929611ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-244607 exec mysql-859648c796-sttwv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.14s)
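The 1045 and 2002 errors above are transient: mysqld inside the pod is still initializing when the first queries arrive, so the test simply retries. An equivalent retry sketch against the pod name from this run (the pod name changes with every rollout):

	until kubectl --context functional-244607 exec mysql-859648c796-sttwv -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 2   # retry until mysqld accepts the root password and answers the query
	done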

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/18235/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "sudo cat /etc/test/nested/copy/18235/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.59s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/18235.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "sudo cat /etc/ssl/certs/18235.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/18235.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "sudo cat /usr/share/ca-certificates/18235.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/182352.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "sudo cat /etc/ssl/certs/182352.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/182352.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "sudo cat /usr/share/ca-certificates/182352.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.59s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-244607 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244607 ssh "sudo systemctl is-active docker": exit status 1 (238.910267ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244607 ssh "sudo systemctl is-active containerd": exit status 1 (240.877975ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
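With crio as the configured runtime, the docker and containerd units are expected to be inactive in the guest; systemctl is-active prints "inactive" and exits 3, which minikube ssh reports as a non-zero exit, so the failing commands above are the desired outcome. The same check by hand, assuming the profile is still running:

	# Both commands should print "inactive" and exit non-zero on a crio node.
	out/minikube-linux-amd64 -p functional-244607 ssh "sudo systemctl is-active docker"
	out/minikube-linux-amd64 -p functional-244607 ssh "sudo systemctl is-active containerd"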

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.27s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 version -o=json --components: (1.269830488s)
--- PASS: TestFunctional/parallel/Version/components (1.27s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image ls --format short --alsologtostderr
E0311 20:22:41.496706   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244607 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-244607
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-244607
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244607 image ls --format short --alsologtostderr:
I0311 20:22:41.337088   27242 out.go:291] Setting OutFile to fd 1 ...
I0311 20:22:41.337198   27242 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 20:22:41.337208   27242 out.go:304] Setting ErrFile to fd 2...
I0311 20:22:41.337213   27242 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 20:22:41.337400   27242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
I0311 20:22:41.337943   27242 config.go:182] Loaded profile config "functional-244607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 20:22:41.338034   27242 config.go:182] Loaded profile config "functional-244607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 20:22:41.338390   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0311 20:22:41.338430   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
I0311 20:22:41.352940   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41851
I0311 20:22:41.353428   27242 main.go:141] libmachine: () Calling .GetVersion
I0311 20:22:41.353950   27242 main.go:141] libmachine: Using API Version  1
I0311 20:22:41.353965   27242 main.go:141] libmachine: () Calling .SetConfigRaw
I0311 20:22:41.354314   27242 main.go:141] libmachine: () Calling .GetMachineName
I0311 20:22:41.354498   27242 main.go:141] libmachine: (functional-244607) Calling .GetState
I0311 20:22:41.356293   27242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0311 20:22:41.356331   27242 main.go:141] libmachine: Launching plugin server for driver kvm2
I0311 20:22:41.370333   27242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44707
I0311 20:22:41.370664   27242 main.go:141] libmachine: () Calling .GetVersion
I0311 20:22:41.371116   27242 main.go:141] libmachine: Using API Version  1
I0311 20:22:41.371139   27242 main.go:141] libmachine: () Calling .SetConfigRaw
I0311 20:22:41.371454   27242 main.go:141] libmachine: () Calling .GetMachineName
I0311 20:22:41.371661   27242 main.go:141] libmachine: (functional-244607) Calling .DriverName
I0311 20:22:41.371865   27242 ssh_runner.go:195] Run: systemctl --version
I0311 20:22:41.371893   27242 main.go:141] libmachine: (functional-244607) Calling .GetSSHHostname
I0311 20:22:41.374417   27242 main.go:141] libmachine: (functional-244607) DBG | domain functional-244607 has defined MAC address 52:54:00:a3:1f:af in network mk-functional-244607
I0311 20:22:41.374807   27242 main.go:141] libmachine: (functional-244607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:1f:af", ip: ""} in network mk-functional-244607: {Iface:virbr1 ExpiryTime:2024-03-11 21:19:41 +0000 UTC Type:0 Mac:52:54:00:a3:1f:af Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:functional-244607 Clientid:01:52:54:00:a3:1f:af}
I0311 20:22:41.374837   27242 main.go:141] libmachine: (functional-244607) DBG | domain functional-244607 has defined IP address 192.168.39.51 and MAC address 52:54:00:a3:1f:af in network mk-functional-244607
I0311 20:22:41.374923   27242 main.go:141] libmachine: (functional-244607) Calling .GetSSHPort
I0311 20:22:41.375056   27242 main.go:141] libmachine: (functional-244607) Calling .GetSSHKeyPath
I0311 20:22:41.375198   27242 main.go:141] libmachine: (functional-244607) Calling .GetSSHUsername
I0311 20:22:41.375321   27242 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/functional-244607/id_rsa Username:docker}
I0311 20:22:41.502928   27242 ssh_runner.go:195] Run: sudo crictl images --output json
I0311 20:22:42.075396   27242 main.go:141] libmachine: Making call to close driver server
I0311 20:22:42.075415   27242 main.go:141] libmachine: (functional-244607) Calling .Close
I0311 20:22:42.075695   27242 main.go:141] libmachine: Successfully made call to close driver server
I0311 20:22:42.075712   27242 main.go:141] libmachine: Making call to close connection to plugin binary
I0311 20:22:42.075742   27242 main.go:141] libmachine: Making call to close driver server
I0311 20:22:42.075752   27242 main.go:141] libmachine: (functional-244607) Calling .Close
I0311 20:22:42.075966   27242 main.go:141] libmachine: Successfully made call to close driver server
I0311 20:22:42.075978   27242 main.go:141] libmachine: Making call to close connection to plugin binary
I0311 20:22:42.075998   27242 main.go:141] libmachine: (functional-244607) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.80s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244607 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | e4720093a3c13 | 191MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/google-containers/addon-resizer  | functional-244607  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-244607  | f37a8232405a7 | 3.35kB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244607 image ls --format table --alsologtostderr:
I0311 20:22:42.474864   27300 out.go:291] Setting OutFile to fd 1 ...
I0311 20:22:42.475198   27300 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 20:22:42.475212   27300 out.go:304] Setting ErrFile to fd 2...
I0311 20:22:42.475219   27300 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 20:22:42.475495   27300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
I0311 20:22:42.476261   27300 config.go:182] Loaded profile config "functional-244607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 20:22:42.476399   27300 config.go:182] Loaded profile config "functional-244607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 20:22:42.476971   27300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0311 20:22:42.477023   27300 main.go:141] libmachine: Launching plugin server for driver kvm2
I0311 20:22:42.495235   27300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44119
I0311 20:22:42.495688   27300 main.go:141] libmachine: () Calling .GetVersion
I0311 20:22:42.496349   27300 main.go:141] libmachine: Using API Version  1
I0311 20:22:42.496385   27300 main.go:141] libmachine: () Calling .SetConfigRaw
I0311 20:22:42.496733   27300 main.go:141] libmachine: () Calling .GetMachineName
I0311 20:22:42.496966   27300 main.go:141] libmachine: (functional-244607) Calling .GetState
I0311 20:22:42.499667   27300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0311 20:22:42.499712   27300 main.go:141] libmachine: Launching plugin server for driver kvm2
I0311 20:22:42.517893   27300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
I0311 20:22:42.518413   27300 main.go:141] libmachine: () Calling .GetVersion
I0311 20:22:42.518928   27300 main.go:141] libmachine: Using API Version  1
I0311 20:22:42.518946   27300 main.go:141] libmachine: () Calling .SetConfigRaw
I0311 20:22:42.519204   27300 main.go:141] libmachine: () Calling .GetMachineName
I0311 20:22:42.519411   27300 main.go:141] libmachine: (functional-244607) Calling .DriverName
I0311 20:22:42.519604   27300 ssh_runner.go:195] Run: systemctl --version
I0311 20:22:42.519632   27300 main.go:141] libmachine: (functional-244607) Calling .GetSSHHostname
I0311 20:22:42.523002   27300 main.go:141] libmachine: (functional-244607) DBG | domain functional-244607 has defined MAC address 52:54:00:a3:1f:af in network mk-functional-244607
I0311 20:22:42.523415   27300 main.go:141] libmachine: (functional-244607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:1f:af", ip: ""} in network mk-functional-244607: {Iface:virbr1 ExpiryTime:2024-03-11 21:19:41 +0000 UTC Type:0 Mac:52:54:00:a3:1f:af Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:functional-244607 Clientid:01:52:54:00:a3:1f:af}
I0311 20:22:42.523447   27300 main.go:141] libmachine: (functional-244607) DBG | domain functional-244607 has defined IP address 192.168.39.51 and MAC address 52:54:00:a3:1f:af in network mk-functional-244607
I0311 20:22:42.523595   27300 main.go:141] libmachine: (functional-244607) Calling .GetSSHPort
I0311 20:22:42.523751   27300 main.go:141] libmachine: (functional-244607) Calling .GetSSHKeyPath
I0311 20:22:42.523897   27300 main.go:141] libmachine: (functional-244607) Calling .GetSSHUsername
I0311 20:22:42.524024   27300 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/functional-244607/id_rsa Username:docker}
I0311 20:22:42.628835   27300 ssh_runner.go:195] Run: sudo crictl images --output json
I0311 20:22:42.720161   27300 main.go:141] libmachine: Making call to close driver server
I0311 20:22:42.720179   27300 main.go:141] libmachine: (functional-244607) Calling .Close
I0311 20:22:42.720444   27300 main.go:141] libmachine: Successfully made call to close driver server
I0311 20:22:42.720456   27300 main.go:141] libmachine: (functional-244607) DBG | Closing plugin on server side
I0311 20:22:42.720459   27300 main.go:141] libmachine: Making call to close connection to plugin binary
I0311 20:22:42.720469   27300 main.go:141] libmachine: Making call to close driver server
I0311 20:22:42.720477   27300 main.go:141] libmachine: (functional-244607) Calling .Close
I0311 20:22:42.720700   27300 main.go:141] libmachine: Successfully made call to close driver server
I0311 20:22:42.720715   27300 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244607 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-244607"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql
@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"f37a8232405a7aeb00cbe168e9b5e8b293e41ba2baac3ab5b595a63cbd9e61de","repoDigests":["localhost/minikube-local-cache-test@sha256:a0c6ab6d897cacdd2a5f7f4584ab90f1b3c914d5305b912c3e8766885bc31df0"],"repoTags":["localhost/minikube-local-cache-test:functional-244607"],"size":"3345"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:399
3d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":["docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71","docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865895"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDige
sts":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry
.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"82e4c8a736a4f
cf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","
repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244607 image ls --format json --alsologtostderr:
I0311 20:22:42.141966   27266 out.go:291] Setting OutFile to fd 1 ...
I0311 20:22:42.142082   27266 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 20:22:42.142091   27266 out.go:304] Setting ErrFile to fd 2...
I0311 20:22:42.142097   27266 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 20:22:42.142294   27266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
I0311 20:22:42.142805   27266 config.go:182] Loaded profile config "functional-244607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 20:22:42.142897   27266 config.go:182] Loaded profile config "functional-244607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 20:22:42.143238   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0311 20:22:42.143295   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
I0311 20:22:42.158594   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44047
I0311 20:22:42.159032   27266 main.go:141] libmachine: () Calling .GetVersion
I0311 20:22:42.159648   27266 main.go:141] libmachine: Using API Version  1
I0311 20:22:42.159684   27266 main.go:141] libmachine: () Calling .SetConfigRaw
I0311 20:22:42.160079   27266 main.go:141] libmachine: () Calling .GetMachineName
I0311 20:22:42.160337   27266 main.go:141] libmachine: (functional-244607) Calling .GetState
I0311 20:22:42.162350   27266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0311 20:22:42.162390   27266 main.go:141] libmachine: Launching plugin server for driver kvm2
I0311 20:22:42.176220   27266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39347
I0311 20:22:42.176617   27266 main.go:141] libmachine: () Calling .GetVersion
I0311 20:22:42.177125   27266 main.go:141] libmachine: Using API Version  1
I0311 20:22:42.177147   27266 main.go:141] libmachine: () Calling .SetConfigRaw
I0311 20:22:42.177602   27266 main.go:141] libmachine: () Calling .GetMachineName
I0311 20:22:42.177838   27266 main.go:141] libmachine: (functional-244607) Calling .DriverName
I0311 20:22:42.178056   27266 ssh_runner.go:195] Run: systemctl --version
I0311 20:22:42.178090   27266 main.go:141] libmachine: (functional-244607) Calling .GetSSHHostname
I0311 20:22:42.180967   27266 main.go:141] libmachine: (functional-244607) DBG | domain functional-244607 has defined MAC address 52:54:00:a3:1f:af in network mk-functional-244607
I0311 20:22:42.181510   27266 main.go:141] libmachine: (functional-244607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:1f:af", ip: ""} in network mk-functional-244607: {Iface:virbr1 ExpiryTime:2024-03-11 21:19:41 +0000 UTC Type:0 Mac:52:54:00:a3:1f:af Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:functional-244607 Clientid:01:52:54:00:a3:1f:af}
I0311 20:22:42.181538   27266 main.go:141] libmachine: (functional-244607) DBG | domain functional-244607 has defined IP address 192.168.39.51 and MAC address 52:54:00:a3:1f:af in network mk-functional-244607
I0311 20:22:42.181758   27266 main.go:141] libmachine: (functional-244607) Calling .GetSSHPort
I0311 20:22:42.181926   27266 main.go:141] libmachine: (functional-244607) Calling .GetSSHKeyPath
I0311 20:22:42.182049   27266 main.go:141] libmachine: (functional-244607) Calling .GetSSHUsername
I0311 20:22:42.182214   27266 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/functional-244607/id_rsa Username:docker}
I0311 20:22:42.301029   27266 ssh_runner.go:195] Run: sudo crictl images --output json
I0311 20:22:42.400310   27266 main.go:141] libmachine: Making call to close driver server
I0311 20:22:42.400322   27266 main.go:141] libmachine: (functional-244607) Calling .Close
I0311 20:22:42.400572   27266 main.go:141] libmachine: Successfully made call to close driver server
I0311 20:22:42.400589   27266 main.go:141] libmachine: Making call to close connection to plugin binary
I0311 20:22:42.400620   27266 main.go:141] libmachine: Making call to close driver server
I0311 20:22:42.400636   27266 main.go:141] libmachine: (functional-244607) Calling .Close
I0311 20:22:42.400862   27266 main.go:141] libmachine: (functional-244607) DBG | Closing plugin on server side
I0311 20:22:42.400889   27266 main.go:141] libmachine: Successfully made call to close driver server
I0311 20:22:42.400896   27266 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
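
Note: the `image ls --format json` output above is a flat JSON array whose objects carry id, repoDigests, repoTags, and size (size is a string of bytes). A minimal Go sketch of how a consumer could decode that output and look for a tag — the binary path, profile name, and target tag are taken from the run above, everything else is illustrative:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// image mirrors the fields visible in the image ls --format json output above.
	type image struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"`
	}

	func main() {
		// Same invocation as the test run above.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-244607",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var images []image
		if err := json.Unmarshal(out, &images); err != nil {
			log.Fatal(err)
		}
		// Check whether a given tag is present, the kind of assertion the image tests make.
		want := "gcr.io/google-containers/addon-resizer:functional-244607"
		for _, img := range images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Printf("found %s (%s bytes)\n", tag, img.Size)
					return
				}
			}
		}
		fmt.Printf("%s not found among %d images\n", want, len(images))
	}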

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244607 ssh pgrep buildkitd: exit status 1 (294.404318ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image build -t localhost/my-image:functional-244607 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 image build -t localhost/my-image:functional-244607 testdata/build --alsologtostderr: (3.123062331s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244607 image build -t localhost/my-image:functional-244607 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 50c120fe56b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-244607
--> bc9b8ffbed1
Successfully tagged localhost/my-image:functional-244607
bc9b8ffbed12fe627f1fac0b42ab4531f46da99327a0d70693eae8f6f20b1287
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244607 image build -t localhost/my-image:functional-244607 testdata/build --alsologtostderr:
I0311 20:22:40.684880   27219 out.go:291] Setting OutFile to fd 1 ...
I0311 20:22:40.685157   27219 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 20:22:40.685168   27219 out.go:304] Setting ErrFile to fd 2...
I0311 20:22:40.685172   27219 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 20:22:40.685398   27219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
I0311 20:22:40.685995   27219 config.go:182] Loaded profile config "functional-244607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 20:22:40.686668   27219 config.go:182] Loaded profile config "functional-244607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0311 20:22:40.687063   27219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0311 20:22:40.687126   27219 main.go:141] libmachine: Launching plugin server for driver kvm2
I0311 20:22:40.701385   27219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40967
I0311 20:22:40.701860   27219 main.go:141] libmachine: () Calling .GetVersion
I0311 20:22:40.702396   27219 main.go:141] libmachine: Using API Version  1
I0311 20:22:40.702419   27219 main.go:141] libmachine: () Calling .SetConfigRaw
I0311 20:22:40.702725   27219 main.go:141] libmachine: () Calling .GetMachineName
I0311 20:22:40.702906   27219 main.go:141] libmachine: (functional-244607) Calling .GetState
I0311 20:22:40.704548   27219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0311 20:22:40.704591   27219 main.go:141] libmachine: Launching plugin server for driver kvm2
I0311 20:22:40.719078   27219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
I0311 20:22:40.719454   27219 main.go:141] libmachine: () Calling .GetVersion
I0311 20:22:40.720026   27219 main.go:141] libmachine: Using API Version  1
I0311 20:22:40.720062   27219 main.go:141] libmachine: () Calling .SetConfigRaw
I0311 20:22:40.720385   27219 main.go:141] libmachine: () Calling .GetMachineName
I0311 20:22:40.720609   27219 main.go:141] libmachine: (functional-244607) Calling .DriverName
I0311 20:22:40.720847   27219 ssh_runner.go:195] Run: systemctl --version
I0311 20:22:40.720875   27219 main.go:141] libmachine: (functional-244607) Calling .GetSSHHostname
I0311 20:22:40.723837   27219 main.go:141] libmachine: (functional-244607) DBG | domain functional-244607 has defined MAC address 52:54:00:a3:1f:af in network mk-functional-244607
I0311 20:22:40.724355   27219 main.go:141] libmachine: (functional-244607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:1f:af", ip: ""} in network mk-functional-244607: {Iface:virbr1 ExpiryTime:2024-03-11 21:19:41 +0000 UTC Type:0 Mac:52:54:00:a3:1f:af Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:functional-244607 Clientid:01:52:54:00:a3:1f:af}
I0311 20:22:40.724389   27219 main.go:141] libmachine: (functional-244607) DBG | domain functional-244607 has defined IP address 192.168.39.51 and MAC address 52:54:00:a3:1f:af in network mk-functional-244607
I0311 20:22:40.724520   27219 main.go:141] libmachine: (functional-244607) Calling .GetSSHPort
I0311 20:22:40.724675   27219 main.go:141] libmachine: (functional-244607) Calling .GetSSHKeyPath
I0311 20:22:40.724844   27219 main.go:141] libmachine: (functional-244607) Calling .GetSSHUsername
I0311 20:22:40.724976   27219 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/functional-244607/id_rsa Username:docker}
I0311 20:22:40.876569   27219 build_images.go:161] Building image from path: /tmp/build.2665715409.tar
I0311 20:22:40.876634   27219 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0311 20:22:40.904088   27219 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2665715409.tar
I0311 20:22:40.923141   27219 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2665715409.tar: stat -c "%s %y" /var/lib/minikube/build/build.2665715409.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2665715409.tar': No such file or directory
I0311 20:22:40.923170   27219 ssh_runner.go:362] scp /tmp/build.2665715409.tar --> /var/lib/minikube/build/build.2665715409.tar (3072 bytes)
I0311 20:22:40.979638   27219 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2665715409
I0311 20:22:41.005402   27219 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2665715409 -xf /var/lib/minikube/build/build.2665715409.tar
I0311 20:22:41.028136   27219 crio.go:297] Building image: /var/lib/minikube/build/build.2665715409
I0311 20:22:41.028224   27219 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-244607 /var/lib/minikube/build/build.2665715409 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0311 20:22:43.723364   27219 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-244607 /var/lib/minikube/build/build.2665715409 --cgroup-manager=cgroupfs: (2.695115616s)
I0311 20:22:43.723413   27219 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2665715409
I0311 20:22:43.738050   27219 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2665715409.tar
I0311 20:22:43.751019   27219 build_images.go:217] Built localhost/my-image:functional-244607 from /tmp/build.2665715409.tar
I0311 20:22:43.751045   27219 build_images.go:133] succeeded building to: functional-244607
I0311 20:22:43.751049   27219 build_images.go:134] failed building to: 
I0311 20:22:43.751074   27219 main.go:141] libmachine: Making call to close driver server
I0311 20:22:43.751085   27219 main.go:141] libmachine: (functional-244607) Calling .Close
I0311 20:22:43.751356   27219 main.go:141] libmachine: Successfully made call to close driver server
I0311 20:22:43.751375   27219 main.go:141] libmachine: Making call to close connection to plugin binary
I0311 20:22:43.751386   27219 main.go:141] libmachine: (functional-244607) DBG | Closing plugin on server side
I0311 20:22:43.751420   27219 main.go:141] libmachine: Making call to close driver server
I0311 20:22:43.751434   27219 main.go:141] libmachine: (functional-244607) Calling .Close
I0311 20:22:43.751697   27219 main.go:141] libmachine: Successfully made call to close driver server
I0311 20:22:43.751726   27219 main.go:141] libmachine: (functional-244607) DBG | Closing plugin on server side
I0311 20:22:43.751755   27219 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.66s)
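
Note: the STEP 1/3..3/3 lines above imply a build context of a Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) plus a content.txt file. The Go sketch below is a hedged reconstruction, not the actual contents of testdata/build: it writes an equivalent context to a temp directory and invokes the same image build command, which on the crio runtime is executed as the podman build seen above.

	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
	)

	func main() {
		// Assumed reconstruction of a build context matching the build steps above;
		// the real testdata/build directory may differ.
		dir, err := os.MkdirTemp("", "build")
		if err != nil {
			log.Fatal(err)
		}
		defer os.RemoveAll(dir)

		dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
		if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("test\n"), 0o644); err != nil {
			log.Fatal(err)
		}

		// Same command shape as functional_test.go:314 above.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-244607",
			"image", "build", "-t", "localhost/my-image:functional-244607", dir, "--alsologtostderr")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}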

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-244607
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-244607 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-244607 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-456gr" [61a0efed-8783-45f6-b2dc-f19089d381bb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-456gr" [61a0efed-8783-45f6-b2dc-f19089d381bb] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.012640076s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.27s)
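
Note: DeployApp above creates a hello-node deployment from registry.k8s.io/echoserver:1.8, exposes it as a NodePort on 8080, and waits for the pod to become Ready. A minimal sketch of the same sequence against the same context; it substitutes kubectl wait for the test's own polling helper:

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	// kubectl runs a kubectl subcommand against the functional-244607 context.
	func kubectl(args ...string) {
		cmd := exec.Command("kubectl", append([]string{"--context", "functional-244607"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubectl %v: %v", args, err)
		}
	}

	func main() {
		// Same two commands as functional_test.go:1435 and :1441 above.
		kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
		kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
		// The test polls for app=hello-node pods; kubectl wait is an equivalent one-liner.
		kubectl("wait", "--for=condition=ready", "pod", "--selector=app=hello-node", "--timeout=10m")
	}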

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image load --daemon gcr.io/google-containers/addon-resizer:functional-244607 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 image load --daemon gcr.io/google-containers/addon-resizer:functional-244607 --alsologtostderr: (5.137088672s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image load --daemon gcr.io/google-containers/addon-resizer:functional-244607 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 image load --daemon gcr.io/google-containers/addon-resizer:functional-244607 --alsologtostderr: (2.439508515s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-244607
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image load --daemon gcr.io/google-containers/addon-resizer:functional-244607 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 image load --daemon gcr.io/google-containers/addon-resizer:functional-244607 --alsologtostderr: (8.337858119s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 service list -o json
functional_test.go:1490: Took "371.515966ms" to run "out/minikube-linux-amd64 -p functional-244607 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.51:30460
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.51:30460
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)
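
Note: `service hello-node --url` resolves the NodePort service to a host-reachable endpoint (http://192.168.39.51:30460 above). A minimal sketch that fetches the URL and probes it the way the HTTPS/Format/URL checks do; the HTTP GET is illustrative rather than copied from the test:

	package main

	import (
		"fmt"
		"log"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command as functional_test.go:1555 above: print the hello-node NodePort URL.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-244607",
			"service", "hello-node", "--url").Output()
		if err != nil {
			log.Fatal(err)
		}
		url := strings.TrimSpace(string(out))
		fmt.Println("endpoint:", url)

		// Hit the echoserver behind the service; any response at all means the NodePort is reachable from the host.
		resp, err := http.Get(url)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}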

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (23.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244607 /tmp/TestFunctionalparallelMountCmdany-port666292836/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710188532263717418" to /tmp/TestFunctionalparallelMountCmdany-port666292836/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710188532263717418" to /tmp/TestFunctionalparallelMountCmdany-port666292836/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710188532263717418" to /tmp/TestFunctionalparallelMountCmdany-port666292836/001/test-1710188532263717418
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244607 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (285.480527ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 11 20:22 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 11 20:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 11 20:22 test-1710188532263717418
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh cat /mount-9p/test-1710188532263717418
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-244607 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [71eb4004-8418-428b-bedb-073b705eb634] Pending
helpers_test.go:344: "busybox-mount" [71eb4004-8418-428b-bedb-073b705eb634] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [71eb4004-8418-428b-bedb-073b705eb634] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [71eb4004-8418-428b-bedb-073b705eb634] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 21.02538534s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-244607 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244607 /tmp/TestFunctionalparallelMountCmdany-port666292836/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (23.76s)
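
Note: the any-port flow above is: start `minikube mount <hostdir>:/mount-9p` as a background process, confirm the 9p mount with findmnt, then verify that files written on the host show up inside the guest. A minimal sketch of that host-to-guest check under the same profile; the sleep stands in for the test's findmnt polling and the file name is illustrative:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
		"time"
	)

	func main() {
		dir, err := os.MkdirTemp("", "mount")
		if err != nil {
			log.Fatal(err)
		}
		defer os.RemoveAll(dir)

		// Background mount, the same shape as the daemon: line above.
		mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-244607",
			dir+":/mount-9p", "--alsologtostderr", "-v=1")
		if err := mount.Start(); err != nil {
			log.Fatal(err)
		}
		defer mount.Process.Kill()
		time.Sleep(5 * time.Second) // crude wait; the test retries findmnt instead

		// Write on the host, then list the guest side over SSH as the test does.
		if err := os.WriteFile(filepath.Join(dir, "created-by-test"), []byte("hello"), 0o644); err != nil {
			log.Fatal(err)
		}
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-244607",
			"ssh", "--", "ls", "-la", "/mount-9p").CombinedOutput()
		if err != nil {
			log.Fatal(err, string(out))
		}
		fmt.Println("guest sees created-by-test:", strings.Contains(string(out), "created-by-test"))
	}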

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "332.526137ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "60.468252ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "286.506491ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "64.142473ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image save gcr.io/google-containers/addon-resizer:functional-244607 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 image save gcr.io/google-containers/addon-resizer:functional-244607 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.737289565s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.74s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image rm gcr.io/google-containers/addon-resizer:functional-244607 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-244607
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 image save --daemon gcr.io/google-containers/addon-resizer:functional-244607 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-244607 image save --daemon gcr.io/google-containers/addon-resizer:functional-244607 --alsologtostderr: (3.626305557s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-244607
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.66s)
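
Note: ImageSaveDaemon above removes the tag from the local docker daemon, pulls it back out of the cluster runtime with `image save --daemon`, and confirms with `docker image inspect`. A minimal sketch of that round trip, reusing the tag and commands from the log:

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	// run executes a command, streams its output, and fails fast on error.
	func run(name string, args ...string) {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("%s %v: %v", name, args, err)
		}
	}

	func main() {
		const tag = "gcr.io/google-containers/addon-resizer:functional-244607"
		run("docker", "rmi", tag)
		run("out/minikube-linux-amd64", "-p", "functional-244607", "image", "save", "--daemon", tag, "--alsologtostderr")
		run("docker", "image", "inspect", tag)
	}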

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244607 /tmp/TestFunctionalparallelMountCmdspecific-port747521024/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244607 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (219.666097ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244607 /tmp/TestFunctionalparallelMountCmdspecific-port747521024/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244607 ssh "sudo umount -f /mount-9p": exit status 1 (277.923624ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-244607 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244607 /tmp/TestFunctionalparallelMountCmdspecific-port747521024/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1602633338/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1602633338/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1602633338/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244607 ssh "findmnt -T" /mount1: exit status 1 (263.768095ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "findmnt -T" /mount1
E0311 20:22:38.935131   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 20:22:38.941171   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 20:22:38.951415   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 20:22:38.971680   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 20:22:39.012521   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 20:22:39.092880   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "findmnt -T" /mount2
E0311 20:22:39.253480   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-244607 ssh "findmnt -T" /mount3
E0311 20:22:39.574701   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-244607 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1602633338/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1602633338/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1602633338/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-244607
E0311 20:22:44.057456   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-244607
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-244607
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMutliControlPlane/serial/StartCluster (243.19s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-834040 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0311 20:22:49.178172   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 20:22:59.418551   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 20:23:19.898750   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 20:24:00.859243   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 20:25:22.779744   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-834040 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m2.485920333s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (243.19s)

                                                
                                    
x
+
TestMutliControlPlane/serial/DeployApp (5.42s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-834040 -- rollout status deployment/busybox: (2.96230126s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-d62cw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-h9jx5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-mx5b4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-d62cw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-h9jx5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-mx5b4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-d62cw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-h9jx5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-mx5b4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (5.42s)
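
Note: the DeployApp step above rolls out a busybox deployment and then runs nslookup from every replica against kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local, exercising both external and in-cluster DNS from pods spread across the HA nodes. A minimal sketch of that loop; the profile name and jsonpath query match the log, the rest is illustrative:

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Route kubectl through the minikube wrapper as ha_test.go does.
		kubectl := func(args ...string) string {
			all := append([]string{"kubectl", "-p", "ha-834040", "--"}, args...)
			out, err := exec.Command("out/minikube-linux-amd64", all...).CombinedOutput()
			if err != nil {
				log.Fatalf("kubectl %v: %v\n%s", args, err, out)
			}
			return string(out)
		}

		// Pod names of the busybox replicas, as queried at ha_test.go:163 above.
		pods := strings.Fields(kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}"))
		for _, pod := range pods {
			for _, host := range []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"} {
				kubectl("exec", pod, "--", "nslookup", host)
			}
		}
	}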

                                                
                                    
x
+
TestMutliControlPlane/serial/PingHostFromPods (1.39s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-d62cw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-d62cw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-h9jx5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-h9jx5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-mx5b4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-834040 -- exec busybox-5b5d89c9d6-mx5b4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.39s)
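
Note: PingHostFromPods checks host reachability from inside each pod: nslookup resolves host.minikube.internal, the awk 'NR==5' / cut pipeline above extracts the resolved address (192.168.39.1 here), and a single ping confirms the route back to the hypervisor host. A minimal sketch collapsing the same two steps into one exec for one pod; the pod name is taken from the log above:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Resolve host.minikube.internal inside the pod, then ping the resulting address.
		script := `ip=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3); ping -c 1 "$ip"`
		out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "ha-834040", "--",
			"exec", "busybox-5b5d89c9d6-d62cw", "--", "sh", "-c", script).CombinedOutput()
		if err != nil {
			log.Fatalf("%v\n%s", err, out)
		}
		fmt.Print(string(out))
	}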

                                                
                                    
x
+
TestMutliControlPlane/serial/AddWorkerNode (48.85s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-834040 -v=7 --alsologtostderr
E0311 20:26:58.808551   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:26:58.813882   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:26:58.824117   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:26:58.844392   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:26:58.884787   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:26:58.965041   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:26:59.125461   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:26:59.445950   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:27:00.086420   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:27:01.367472   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:27:03.928429   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:27:09.048964   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:27:19.289671   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:27:38.935348   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 20:27:39.769827   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-834040 -v=7 --alsologtostderr: (47.993686911s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (48.85s)

                                                
                                    
x
+
TestMutliControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-834040 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMutliControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                    
x
+
TestMutliControlPlane/serial/CopyFile (13.48s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp testdata/cp-test.txt ha-834040:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile2017558617/001/cp-test_ha-834040.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040:/home/docker/cp-test.txt ha-834040-m02:/home/docker/cp-test_ha-834040_ha-834040-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m02 "sudo cat /home/docker/cp-test_ha-834040_ha-834040-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040:/home/docker/cp-test.txt ha-834040-m03:/home/docker/cp-test_ha-834040_ha-834040-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m03 "sudo cat /home/docker/cp-test_ha-834040_ha-834040-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040:/home/docker/cp-test.txt ha-834040-m04:/home/docker/cp-test_ha-834040_ha-834040-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m04 "sudo cat /home/docker/cp-test_ha-834040_ha-834040-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp testdata/cp-test.txt ha-834040-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile2017558617/001/cp-test_ha-834040-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040-m02:/home/docker/cp-test.txt ha-834040:/home/docker/cp-test_ha-834040-m02_ha-834040.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040 "sudo cat /home/docker/cp-test_ha-834040-m02_ha-834040.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040-m02:/home/docker/cp-test.txt ha-834040-m03:/home/docker/cp-test_ha-834040-m02_ha-834040-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m03 "sudo cat /home/docker/cp-test_ha-834040-m02_ha-834040-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040-m02:/home/docker/cp-test.txt ha-834040-m04:/home/docker/cp-test_ha-834040-m02_ha-834040-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m04 "sudo cat /home/docker/cp-test_ha-834040-m02_ha-834040-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp testdata/cp-test.txt ha-834040-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile2017558617/001/cp-test_ha-834040-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt ha-834040:/home/docker/cp-test_ha-834040-m03_ha-834040.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040 "sudo cat /home/docker/cp-test_ha-834040-m03_ha-834040.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt ha-834040-m02:/home/docker/cp-test_ha-834040-m03_ha-834040-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m02 "sudo cat /home/docker/cp-test_ha-834040-m03_ha-834040-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040-m03:/home/docker/cp-test.txt ha-834040-m04:/home/docker/cp-test_ha-834040-m03_ha-834040-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m04 "sudo cat /home/docker/cp-test_ha-834040-m03_ha-834040-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp testdata/cp-test.txt ha-834040-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile2017558617/001/cp-test_ha-834040-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt ha-834040:/home/docker/cp-test_ha-834040-m04_ha-834040.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040 "sudo cat /home/docker/cp-test_ha-834040-m04_ha-834040.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt ha-834040-m02:/home/docker/cp-test_ha-834040-m04_ha-834040-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m02 "sudo cat /home/docker/cp-test_ha-834040-m04_ha-834040-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 cp ha-834040-m04:/home/docker/cp-test.txt ha-834040-m03:/home/docker/cp-test_ha-834040-m04_ha-834040-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m03 "sudo cat /home/docker/cp-test_ha-834040-m04_ha-834040-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (13.48s)
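
For anyone reproducing the copy/verify sequence above by hand, it is the same two commands repeated per node pair. A minimal sketch, assuming the ha-834040 profile from this run is still up and out/minikube-linux-amd64 is the binary under test:

    # Copy a local file onto one node, then read it back over SSH to confirm it landed.
    out/minikube-linux-amd64 -p ha-834040 cp testdata/cp-test.txt ha-834040-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-834040 ssh -n ha-834040-m02 "sudo cat /home/docker/cp-test.txt"

    # The test repeats the same round trip for every source/destination node pair.
    for node in ha-834040 ha-834040-m02 ha-834040-m03 ha-834040-m04; do
      out/minikube-linux-amd64 -p ha-834040 ssh -n "$node" "sudo cat /home/docker/cp-test.txt"
    done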

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.5s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.502769544s)
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.50s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
TestMutliControlPlane/serial/DeleteSecondaryNode (17.42s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-834040 node delete m03 -v=7 --alsologtostderr: (16.658900641s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (17.42s)
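
The delete-and-verify steps above come down to three commands. A minimal sketch, assuming the same ha-834040 profile and that kubectl already points at its context:

    # Drop the m03 control-plane node, then confirm the remaining nodes are healthy.
    out/minikube-linux-amd64 -p ha-834040 node delete m03 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
    kubectl get nodes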

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
TestMutliControlPlane/serial/RestartCluster (317.26s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-834040 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0311 20:41:58.809556   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
E0311 20:42:38.935681   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
E0311 20:43:21.852788   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-834040 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m16.44112972s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (317.26s)
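
The final readiness check in this test is done with a kubectl go-template. A simpler, equivalent probe (a sketch, not the test's literal command), assuming kubectl points at the restarted ha-834040 cluster:

    # Every node should list STATUS Ready; the wait fails if any node never comes back.
    kubectl get nodes
    kubectl wait --for=condition=Ready node --all --timeout=120s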

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                    
TestMutliControlPlane/serial/AddSecondaryNode (79.01s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-834040 --control-plane -v=7 --alsologtostderr
E0311 20:46:58.808555   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-834040 --control-plane -v=7 --alsologtostderr: (1m18.149592026s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (79.01s)
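
Growing the control plane is the same two commands the test runs. A minimal sketch against the ha-834040 profile from this run:

    # Join another control-plane node to the HA cluster, then recheck overall status.
    out/minikube-linux-amd64 node add -p ha-834040 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-834040 status -v=7 --alsologtostderr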

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

                                                
                                    
TestJSONOutput/start/Command (101.97s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-944986 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0311 20:47:38.935570   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-944986 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m41.974123011s)
--- PASS: TestJSONOutput/start/Command (101.97s)
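
With --output=json the start command emits one CloudEvents-style JSON object per line (the same shape as the -- stdout -- block under TestErrorJSONOutput below). A minimal sketch of consuming that stream, assuming jq is installed on the host; jq is not part of the test run itself:

    # Print only the step messages from the JSON event stream.
    out/minikube-linux-amd64 start -p json-output-944986 --output=json --user=testUser \
        --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'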

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-944986 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-944986 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-944986 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-944986 --output=json --user=testUser: (7.363438767s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-482545 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-482545 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.329059ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9c97d750-ecfc-47a3-951b-098e252a3c85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-482545] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"88e39ddc-f308-4f8b-93b2-c43ca709d045","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18358"}}
	{"specversion":"1.0","id":"cde8bbbd-f1ba-43ae-899f-77d852d4d029","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ded0eee5-2cb5-450c-8c82-4cef408305c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig"}}
	{"specversion":"1.0","id":"024073cd-2df1-4e49-a94e-e5542b899ef9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube"}}
	{"specversion":"1.0","id":"b3cd9118-1842-44a8-a808-1ade4b34988b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1c87c214-1a4c-470a-b621-99748199126c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7edbceef-1483-41b7-a5e2-4a3355e42c9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-482545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-482545
--- PASS: TestErrorJSONOutput (0.21s)
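
The failure is itself a JSON event (type io.k8s.sigs.minikube.error with name, exitcode and message fields, as shown in the -- stdout -- block above), so the error path is easy to script against. A minimal sketch, again assuming jq is available:

    # Extract the machine-readable error from the event stream; the start command itself exits 56.
    out/minikube-linux-amd64 start -p json-output-error-482545 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'
    # Clean up the dangling profile afterwards, as the test does.
    out/minikube-linux-amd64 delete -p json-output-error-482545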

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (92.59s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-453772 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-453772 --driver=kvm2  --container-runtime=crio: (44.575026875s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-456792 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-456792 --driver=kvm2  --container-runtime=crio: (45.183176871s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-453772
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-456792
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-456792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-456792
helpers_test.go:175: Cleaning up "first-453772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-453772
--- PASS: TestMinikubeProfile (92.59s)
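
The profile-switching flow above can also be driven by hand. A minimal sketch, reusing the two throwaway profile names from this run:

    # Create two profiles, make the first one active, inspect the list, then clean up.
    out/minikube-linux-amd64 start -p first-453772 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p second-456792 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 profile first-453772
    out/minikube-linux-amd64 profile list -ojson
    out/minikube-linux-amd64 delete -p second-456792
    out/minikube-linux-amd64 delete -p first-453772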

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.31s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-390324 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-390324 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.306784807s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.31s)
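
The VerifyMount* tests that follow simply look for the 9p mount inside the guest. A minimal sketch combining the start flags used above with that check:

    # Start a Kubernetes-free VM with the host mount, then confirm the 9p filesystem is visible.
    out/minikube-linux-amd64 start -p mount-start-1-390324 --memory=2048 --mount --mount-gid 0 \
        --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p mount-start-1-390324 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-390324 ssh -- mount | grep 9p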

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-390324 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-390324 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.5s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-404021 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-404021 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.501649629s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.50s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-404021 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-404021 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-390324 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-404021 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-404021 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.42s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-404021
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-404021: (1.418017264s)
--- PASS: TestMountStart/serial/Stop (1.42s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.14s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-404021
E0311 20:51:58.807535   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-404021: (22.142318407s)
--- PASS: TestMountStart/serial/RestartStopped (23.14s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-404021 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-404021 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (102.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-232100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0311 20:52:38.935135   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-232100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m41.787270721s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (102.19s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-232100 -- rollout status deployment/busybox: (2.456370427s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- exec busybox-5b5d89c9d6-4hsnz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- exec busybox-5b5d89c9d6-8xhwm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- exec busybox-5b5d89c9d6-4hsnz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- exec busybox-5b5d89c9d6-8xhwm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- exec busybox-5b5d89c9d6-4hsnz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- exec busybox-5b5d89c9d6-8xhwm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.18s)
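
The deployment check above is reproducible with minikube's bundled kubectl. A minimal sketch against the multinode-232100 profile; the busybox pod names are specific to this run, so substitute whatever "get pods" returns:

    # Roll out the two-replica busybox deployment and confirm the pods got addresses on both nodes.
    out/minikube-linux-amd64 kubectl -p multinode-232100 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p multinode-232100 -- rollout status deployment/busybox
    out/minikube-linux-amd64 kubectl -p multinode-232100 -- get pods -o jsonpath='{.items[*].status.podIP}'
    out/minikube-linux-amd64 kubectl -p multinode-232100 -- exec busybox-5b5d89c9d6-4hsnz -- nslookup kubernetes.default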

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- exec busybox-5b5d89c9d6-4hsnz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- exec busybox-5b5d89c9d6-4hsnz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- exec busybox-5b5d89c9d6-8xhwm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-232100 -- exec busybox-5b5d89c9d6-8xhwm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)
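
Host reachability from inside the pods uses the same exec pattern. A minimal sketch, reusing a pod name from this run (again, substitute the current pod name) and the gateway address the test resolved here (192.168.39.1):

    # Resolve host.minikube.internal from inside the pod, then ping the host gateway once.
    out/minikube-linux-amd64 kubectl -p multinode-232100 -- exec busybox-5b5d89c9d6-4hsnz -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p multinode-232100 -- exec busybox-5b5d89c9d6-4hsnz -- sh -c "ping -c 1 192.168.39.1"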

                                                
                                    
TestMultiNode/serial/AddNode (41.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-232100 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-232100 -v 3 --alsologtostderr: (40.475782571s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.04s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-232100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 cp testdata/cp-test.txt multinode-232100:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 cp multinode-232100:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile149036959/001/cp-test_multinode-232100.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 cp multinode-232100:/home/docker/cp-test.txt multinode-232100-m02:/home/docker/cp-test_multinode-232100_multinode-232100-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100-m02 "sudo cat /home/docker/cp-test_multinode-232100_multinode-232100-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 cp multinode-232100:/home/docker/cp-test.txt multinode-232100-m03:/home/docker/cp-test_multinode-232100_multinode-232100-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100-m03 "sudo cat /home/docker/cp-test_multinode-232100_multinode-232100-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 cp testdata/cp-test.txt multinode-232100-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 cp multinode-232100-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile149036959/001/cp-test_multinode-232100-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 cp multinode-232100-m02:/home/docker/cp-test.txt multinode-232100:/home/docker/cp-test_multinode-232100-m02_multinode-232100.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100 "sudo cat /home/docker/cp-test_multinode-232100-m02_multinode-232100.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 cp multinode-232100-m02:/home/docker/cp-test.txt multinode-232100-m03:/home/docker/cp-test_multinode-232100-m02_multinode-232100-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100-m03 "sudo cat /home/docker/cp-test_multinode-232100-m02_multinode-232100-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 cp testdata/cp-test.txt multinode-232100-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 cp multinode-232100-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile149036959/001/cp-test_multinode-232100-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 cp multinode-232100-m03:/home/docker/cp-test.txt multinode-232100:/home/docker/cp-test_multinode-232100-m03_multinode-232100.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100 "sudo cat /home/docker/cp-test_multinode-232100-m03_multinode-232100.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 cp multinode-232100-m03:/home/docker/cp-test.txt multinode-232100-m02:/home/docker/cp-test_multinode-232100-m03_multinode-232100-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 ssh -n multinode-232100-m02 "sudo cat /home/docker/cp-test_multinode-232100-m03_multinode-232100-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.46s)

                                                
                                    
TestMultiNode/serial/StopNode (3.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-232100 node stop m03: (2.292921264s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-232100 status: exit status 7 (429.962086ms)

                                                
                                                
-- stdout --
	multinode-232100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-232100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-232100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-232100 status --alsologtostderr: exit status 7 (444.149286ms)

                                                
                                                
-- stdout --
	multinode-232100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-232100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-232100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 20:54:40.503900   42551 out.go:291] Setting OutFile to fd 1 ...
	I0311 20:54:40.504008   42551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:54:40.504018   42551 out.go:304] Setting ErrFile to fd 2...
	I0311 20:54:40.504022   42551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 20:54:40.504180   42551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 20:54:40.504352   42551 out.go:298] Setting JSON to false
	I0311 20:54:40.504377   42551 mustload.go:65] Loading cluster: multinode-232100
	I0311 20:54:40.504500   42551 notify.go:220] Checking for updates...
	I0311 20:54:40.504715   42551 config.go:182] Loaded profile config "multinode-232100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 20:54:40.504727   42551 status.go:255] checking status of multinode-232100 ...
	I0311 20:54:40.505074   42551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:54:40.505122   42551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:54:40.524385   42551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39831
	I0311 20:54:40.524846   42551 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:54:40.525500   42551 main.go:141] libmachine: Using API Version  1
	I0311 20:54:40.525524   42551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:54:40.525839   42551 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:54:40.526042   42551 main.go:141] libmachine: (multinode-232100) Calling .GetState
	I0311 20:54:40.527755   42551 status.go:330] multinode-232100 host status = "Running" (err=<nil>)
	I0311 20:54:40.527773   42551 host.go:66] Checking if "multinode-232100" exists ...
	I0311 20:54:40.528016   42551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:54:40.528061   42551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:54:40.542225   42551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38603
	I0311 20:54:40.542544   42551 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:54:40.542918   42551 main.go:141] libmachine: Using API Version  1
	I0311 20:54:40.542938   42551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:54:40.543232   42551 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:54:40.543426   42551 main.go:141] libmachine: (multinode-232100) Calling .GetIP
	I0311 20:54:40.546077   42551 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:54:40.546451   42551 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:54:40.546489   42551 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:54:40.546609   42551 host.go:66] Checking if "multinode-232100" exists ...
	I0311 20:54:40.546864   42551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:54:40.546898   42551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:54:40.560860   42551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0311 20:54:40.561210   42551 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:54:40.561604   42551 main.go:141] libmachine: Using API Version  1
	I0311 20:54:40.561620   42551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:54:40.561909   42551 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:54:40.562077   42551 main.go:141] libmachine: (multinode-232100) Calling .DriverName
	I0311 20:54:40.562230   42551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:54:40.562256   42551 main.go:141] libmachine: (multinode-232100) Calling .GetSSHHostname
	I0311 20:54:40.564664   42551 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:54:40.565020   42551 main.go:141] libmachine: (multinode-232100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:35:9e", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:52:17 +0000 UTC Type:0 Mac:52:54:00:e5:35:9e Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-232100 Clientid:01:52:54:00:e5:35:9e}
	I0311 20:54:40.565050   42551 main.go:141] libmachine: (multinode-232100) DBG | domain multinode-232100 has defined IP address 192.168.39.134 and MAC address 52:54:00:e5:35:9e in network mk-multinode-232100
	I0311 20:54:40.565183   42551 main.go:141] libmachine: (multinode-232100) Calling .GetSSHPort
	I0311 20:54:40.565354   42551 main.go:141] libmachine: (multinode-232100) Calling .GetSSHKeyPath
	I0311 20:54:40.565482   42551 main.go:141] libmachine: (multinode-232100) Calling .GetSSHUsername
	I0311 20:54:40.565628   42551 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/multinode-232100/id_rsa Username:docker}
	I0311 20:54:40.649366   42551 ssh_runner.go:195] Run: systemctl --version
	I0311 20:54:40.656286   42551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:54:40.675057   42551 kubeconfig.go:125] found "multinode-232100" server: "https://192.168.39.134:8443"
	I0311 20:54:40.675079   42551 api_server.go:166] Checking apiserver status ...
	I0311 20:54:40.675123   42551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 20:54:40.699654   42551 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1118/cgroup
	W0311 20:54:40.712856   42551 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1118/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0311 20:54:40.712906   42551 ssh_runner.go:195] Run: ls
	I0311 20:54:40.717794   42551 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0311 20:54:40.722156   42551 api_server.go:279] https://192.168.39.134:8443/healthz returned 200:
	ok
	I0311 20:54:40.722175   42551 status.go:422] multinode-232100 apiserver status = Running (err=<nil>)
	I0311 20:54:40.722186   42551 status.go:257] multinode-232100 status: &{Name:multinode-232100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:54:40.722225   42551 status.go:255] checking status of multinode-232100-m02 ...
	I0311 20:54:40.722491   42551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:54:40.722531   42551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:54:40.737236   42551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38059
	I0311 20:54:40.737632   42551 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:54:40.738098   42551 main.go:141] libmachine: Using API Version  1
	I0311 20:54:40.738117   42551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:54:40.738477   42551 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:54:40.738647   42551 main.go:141] libmachine: (multinode-232100-m02) Calling .GetState
	I0311 20:54:40.740240   42551 status.go:330] multinode-232100-m02 host status = "Running" (err=<nil>)
	I0311 20:54:40.740256   42551 host.go:66] Checking if "multinode-232100-m02" exists ...
	I0311 20:54:40.740519   42551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:54:40.740567   42551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:54:40.754816   42551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42141
	I0311 20:54:40.755203   42551 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:54:40.755577   42551 main.go:141] libmachine: Using API Version  1
	I0311 20:54:40.755601   42551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:54:40.755924   42551 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:54:40.756075   42551 main.go:141] libmachine: (multinode-232100-m02) Calling .GetIP
	I0311 20:54:40.758769   42551 main.go:141] libmachine: (multinode-232100-m02) DBG | domain multinode-232100-m02 has defined MAC address 52:54:00:a4:17:43 in network mk-multinode-232100
	I0311 20:54:40.759140   42551 main.go:141] libmachine: (multinode-232100-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:17:43", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:53:20 +0000 UTC Type:0 Mac:52:54:00:a4:17:43 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:multinode-232100-m02 Clientid:01:52:54:00:a4:17:43}
	I0311 20:54:40.759164   42551 main.go:141] libmachine: (multinode-232100-m02) DBG | domain multinode-232100-m02 has defined IP address 192.168.39.4 and MAC address 52:54:00:a4:17:43 in network mk-multinode-232100
	I0311 20:54:40.759318   42551 host.go:66] Checking if "multinode-232100-m02" exists ...
	I0311 20:54:40.759620   42551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:54:40.759651   42551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:54:40.773631   42551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0311 20:54:40.774008   42551 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:54:40.774464   42551 main.go:141] libmachine: Using API Version  1
	I0311 20:54:40.774483   42551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:54:40.774798   42551 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:54:40.774976   42551 main.go:141] libmachine: (multinode-232100-m02) Calling .DriverName
	I0311 20:54:40.775131   42551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 20:54:40.775152   42551 main.go:141] libmachine: (multinode-232100-m02) Calling .GetSSHHostname
	I0311 20:54:40.777422   42551 main.go:141] libmachine: (multinode-232100-m02) DBG | domain multinode-232100-m02 has defined MAC address 52:54:00:a4:17:43 in network mk-multinode-232100
	I0311 20:54:40.777764   42551 main.go:141] libmachine: (multinode-232100-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:17:43", ip: ""} in network mk-multinode-232100: {Iface:virbr1 ExpiryTime:2024-03-11 21:53:20 +0000 UTC Type:0 Mac:52:54:00:a4:17:43 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:multinode-232100-m02 Clientid:01:52:54:00:a4:17:43}
	I0311 20:54:40.777790   42551 main.go:141] libmachine: (multinode-232100-m02) DBG | domain multinode-232100-m02 has defined IP address 192.168.39.4 and MAC address 52:54:00:a4:17:43 in network mk-multinode-232100
	I0311 20:54:40.777876   42551 main.go:141] libmachine: (multinode-232100-m02) Calling .GetSSHPort
	I0311 20:54:40.778024   42551 main.go:141] libmachine: (multinode-232100-m02) Calling .GetSSHKeyPath
	I0311 20:54:40.778124   42551 main.go:141] libmachine: (multinode-232100-m02) Calling .GetSSHUsername
	I0311 20:54:40.778242   42551 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18358-11004/.minikube/machines/multinode-232100-m02/id_rsa Username:docker}
	I0311 20:54:40.856613   42551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 20:54:40.873604   42551 status.go:257] multinode-232100-m02 status: &{Name:multinode-232100-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0311 20:54:40.873637   42551 status.go:255] checking status of multinode-232100-m03 ...
	I0311 20:54:40.874000   42551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0311 20:54:40.874042   42551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0311 20:54:40.888750   42551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39343
	I0311 20:54:40.889165   42551 main.go:141] libmachine: () Calling .GetVersion
	I0311 20:54:40.889634   42551 main.go:141] libmachine: Using API Version  1
	I0311 20:54:40.889657   42551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0311 20:54:40.889951   42551 main.go:141] libmachine: () Calling .GetMachineName
	I0311 20:54:40.890129   42551 main.go:141] libmachine: (multinode-232100-m03) Calling .GetState
	I0311 20:54:40.891715   42551 status.go:330] multinode-232100-m03 host status = "Stopped" (err=<nil>)
	I0311 20:54:40.891726   42551 status.go:343] host is not running, skipping remaining checks
	I0311 20:54:40.891731   42551 status.go:257] multinode-232100-m03 status: &{Name:multinode-232100-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.17s)
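
The exit code is the interesting part here: once any node is stopped, "status" returns 7 rather than 0, which is why the Non-zero exit above is expected. A minimal sketch, assuming the multinode-232100 profile from this run:

    # Stop one worker and observe the degraded status plus the non-zero exit code.
    out/minikube-linux-amd64 -p multinode-232100 node stop m03
    out/minikube-linux-amd64 -p multinode-232100 status --alsologtostderr
    echo "status exit code: $?"    # 7 while multinode-232100-m03 is down, per the run above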

                                                
                                    
TestMultiNode/serial/StartAfterStop (28.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-232100 node start m03 -v=7 --alsologtostderr: (28.323255605s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.95s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-232100 node delete m03: (2.00453597s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.54s)
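For reference, the readiness check driven above can be reproduced by hand with the same go-template the test uses; kubectl prints one Ready status per node (the context is whatever the profile created):

  $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'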

                                                
                                    
TestMultiNode/serial/RestartMultiNode (170.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-232100 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-232100 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m49.488204234s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-232100 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (170.04s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-232100
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-232100-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-232100-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (69.291729ms)

                                                
                                                
-- stdout --
	* [multinode-232100-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-232100-m02' is duplicated with machine name 'multinode-232100-m02' in profile 'multinode-232100'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-232100-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-232100-m03 --driver=kvm2  --container-runtime=crio: (46.117819902s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-232100
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-232100: exit status 80 (220.466366ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-232100 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-232100-m03 already exists in multinode-232100-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-232100-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.45s)
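A minimal sketch of how to avoid the two conflicts exercised above (profile and node names are the ones from this run): check what already exists before starting a new profile, and grow an existing cluster through node add rather than a second start.

  # profile names must be unique, including the auto-generated per-node machine names
  $ out/minikube-linux-amd64 profile list
  $ out/minikube-linux-amd64 node list -p multinode-232100
  # add a worker to the existing profile instead of starting a clashing one
  $ out/minikube-linux-amd64 node add -p multinode-232100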

                                                
                                    
TestScheduledStopUnix (117.04s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-234803 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-234803 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.246694941s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-234803 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-234803 -n scheduled-stop-234803
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-234803 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-234803 --cancel-scheduled
E0311 21:11:58.809530   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-234803 -n scheduled-stop-234803
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-234803
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-234803 --schedule 15s
E0311 21:12:21.983500   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0311 21:12:38.935967   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-234803
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-234803: exit status 7 (74.364648ms)

                                                
                                                
-- stdout --
	scheduled-stop-234803
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-234803 -n scheduled-stop-234803
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-234803 -n scheduled-stop-234803: exit status 7 (74.529337ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-234803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-234803
--- PASS: TestScheduledStopUnix (117.04s)
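The scheduled-stop flow exercised above boils down to three commands, all taken from the runs in this test (profile name and delay are just the ones used here):

  # arm a stop in the future, inspect the pending schedule, then cancel it
  $ out/minikube-linux-amd64 stop -p scheduled-stop-234803 --schedule 5m
  $ out/minikube-linux-amd64 status --format='{{.TimeToStop}}' -p scheduled-stop-234803
  $ out/minikube-linux-amd64 stop -p scheduled-stop-234803 --cancel-scheduled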

                                                
                                    
TestRunningBinaryUpgrade (149.8s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1309127164 start -p running-upgrade-169709 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1309127164 start -p running-upgrade-169709 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (57.412590727s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-169709 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-169709 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.888866157s)
helpers_test.go:175: Cleaning up "running-upgrade-169709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-169709
--- PASS: TestRunningBinaryUpgrade (149.80s)

                                                
                                    
TestNetworkPlugins/group/false (3.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-427678 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-427678 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (113.173381ms)

                                                
                                                
-- stdout --
	* [false-427678] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 21:13:06.901000   48797 out.go:291] Setting OutFile to fd 1 ...
	I0311 21:13:06.901243   48797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:13:06.901253   48797 out.go:304] Setting ErrFile to fd 2...
	I0311 21:13:06.901263   48797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 21:13:06.901437   48797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-11004/.minikube/bin
	I0311 21:13:06.901947   48797 out.go:298] Setting JSON to false
	I0311 21:13:06.902762   48797 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6936,"bootTime":1710184651,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0311 21:13:06.902813   48797 start.go:139] virtualization: kvm guest
	I0311 21:13:06.904954   48797 out.go:177] * [false-427678] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0311 21:13:06.906323   48797 notify.go:220] Checking for updates...
	I0311 21:13:06.906329   48797 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 21:13:06.907669   48797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 21:13:06.908978   48797 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	I0311 21:13:06.910462   48797 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	I0311 21:13:06.911836   48797 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0311 21:13:06.913254   48797 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 21:13:06.914983   48797 config.go:182] Loaded profile config "force-systemd-flag-193340": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:13:06.915095   48797 config.go:182] Loaded profile config "kubernetes-upgrade-171195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0311 21:13:06.915202   48797 config.go:182] Loaded profile config "offline-crio-153995": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0311 21:13:06.915328   48797 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 21:13:06.950620   48797 out.go:177] * Using the kvm2 driver based on user configuration
	I0311 21:13:06.951791   48797 start.go:297] selected driver: kvm2
	I0311 21:13:06.951803   48797 start.go:901] validating driver "kvm2" against <nil>
	I0311 21:13:06.951813   48797 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 21:13:06.953628   48797 out.go:177] 
	W0311 21:13:06.954777   48797 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0311 21:13:06.956041   48797 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-427678 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-427678

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-427678

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-427678

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-427678

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-427678

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-427678

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-427678

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-427678

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-427678

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-427678

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-427678

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-427678" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-427678" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-427678

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-427678"

                                                
                                                
----------------------- debugLogs end: false-427678 [took: 2.947596692s] --------------------------------
helpers_test.go:175: Cleaning up "false-427678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-427678
--- PASS: TestNetworkPlugins/group/false (3.20s)
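The exit-14 above is the expected guard: with --container-runtime=crio a CNI is mandatory, so --cni=false is rejected before any VM is created. A sketch of the distinction, using the same flags this suite passes in its other network-plugin runs (profile names illustrative):

  # rejected: crio with CNI explicitly disabled
  $ out/minikube-linux-amd64 start -p false-427678 --cni=false --driver=kvm2 --container-runtime=crio
  # accepted: pick a concrete CNI (or a manifest path, as the custom-flannel run does)
  $ out/minikube-linux-amd64 start -p kindnet-427678 --cni=kindnet --driver=kvm2 --container-runtime=crio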

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.50s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (193.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2754860400 start -p stopped-upgrade-890519 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2754860400 start -p stopped-upgrade-890519 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m56.713472433s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2754860400 -p stopped-upgrade-890519 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2754860400 -p stopped-upgrade-890519 stop: (2.132500005s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-890519 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-890519 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.69312144s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (193.54s)
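The upgrade path verified here is simply: create the cluster with the old release binary, stop it, then start the same profile with the binary under test. The exact invocations from this run:

  # old release creates and then stops the cluster
  $ /tmp/minikube-v1.26.0.2754860400 start -p stopped-upgrade-890519 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
  $ /tmp/minikube-v1.26.0.2754860400 -p stopped-upgrade-890519 stop
  # new binary restarts the stopped profile in place
  $ out/minikube-linux-amd64 start -p stopped-upgrade-890519 --memory=2200 --driver=kvm2 --container-runtime=crio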

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-890519
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-890519: (1.005713352s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                    
TestPause/serial/Start (83.79s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-717098 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-717098 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m23.788241666s)
--- PASS: TestPause/serial/Start (83.79s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-364658 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-364658 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (74.590254ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-364658] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-11004/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-11004/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
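As the MK_USAGE text above says, --no-kubernetes cannot be combined with a pinned --kubernetes-version; if a version is set in the global config, clear it first and retry without the version flag (commands are the ones from the output above):

  $ out/minikube-linux-amd64 config unset kubernetes-version
  $ out/minikube-linux-amd64 start -p NoKubernetes-364658 --no-kubernetes --driver=kvm2 --container-runtime=crio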

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (73.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-364658 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-364658 --driver=kvm2  --container-runtime=crio: (1m13.230213999s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-364658 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (73.54s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-364658 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-364658 --no-kubernetes --driver=kvm2  --container-runtime=crio: (5.09611346s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-364658 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-364658 status -o json: exit status 2 (394.806941ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-364658","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-364658
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-364658: (1.258983917s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.75s)

                                                
                                    
TestNoKubernetes/serial/Start (27.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-364658 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-364658 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.259661694s)
--- PASS: TestNoKubernetes/serial/Start (27.26s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (119.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m59.71098612s)
--- PASS: TestNetworkPlugins/group/auto/Start (119.71s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (98.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m38.331020655s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (98.33s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-364658 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-364658 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.465007ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
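The non-zero exit here is the point of the check: `systemctl is-active --quiet` exits 0 only when the unit is active, so the failing ssh confirms kubelet is down. A hand-run version of the same probe (profile name from this run):

  $ out/minikube-linux-amd64 ssh -p NoKubernetes-364658 "sudo systemctl is-active --quiet service kubelet"
  $ echo $?    # non-zero while kubelet is stopped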

                                                
                                    
TestNoKubernetes/serial/ProfileList (3.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.4391088s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.03s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-364658
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-364658: (2.581932138s)
--- PASS: TestNoKubernetes/serial/Stop (2.58s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (71.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-364658 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-364658 --driver=kvm2  --container-runtime=crio: (1m11.495210113s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (71.50s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (161.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m41.96841085s)
--- PASS: TestNetworkPlugins/group/calico/Start (161.97s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-364658 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-364658 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.277554ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (107.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m47.541285867s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (107.54s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qmrmt" [64e2eb0d-b542-4877-aaf8-21b42f2c4b8c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007012236s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-427678 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-427678 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mxfr5" [ed556059-9ab5-4b68-95b0-44acfaf29a6d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mxfr5" [ed556059-9ab5-4b68-95b0-44acfaf29a6d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005512122s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-427678 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-427678 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pf4vx" [1434b857-6c2b-470f-9114-8b1024f3f1d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pf4vx" [1434b857-6c2b-470f-9114-8b1024f3f1d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.006303946s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.34s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-427678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
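The DNS, Localhost and HairPin checks above are three small probes run inside the netcat deployment; they can be replayed against any of the netcat deployments this suite creates (context name is the one from this run):

  # service DNS, loopback, and hairpin (the pod reaching itself through its own service)
  $ kubectl --context auto-427678 exec deployment/netcat -- nslookup kubernetes.default
  $ kubectl --context auto-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  $ kubectl --context auto-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"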

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-427678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (112.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0311 21:21:58.808121   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/functional-244607/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m52.074049382s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (112.07s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (109.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m49.829339186s)
--- PASS: TestNetworkPlugins/group/flannel/Start (109.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nmjsm" [b87c4c28-8f6d-4950-857a-0273754ed111] Running
E0311 21:22:38.935784   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/addons-118179/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006597961s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-427678 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-427678 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8mmc9" [30c239e6-e5b0-45bb-8a82-08a11ba280aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8mmc9" [30c239e6-e5b0-45bb-8a82-08a11ba280aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005339396s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.29s)
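Each NetCatPod subtest replaces the netcat deployment from testdata and then polls until a pod carrying the app=netcat label is Running. Outside the harness, the same wait can be expressed directly with kubectl (a sketch; the 15m ceiling matches the test's timeout):

    kubectl --context calico-427678 replace --force -f testdata/netcat-deployment.yaml
    # Block until every pod matching the selector reports Ready, or fail after 15 minutes.
    kubectl --context calico-427678 wait --for=condition=ready pod -l app=netcat --timeout=15m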

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-427678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-427678 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-427678 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-99mbz" [dd4772de-76cb-4499-a600-e96f3ccf2dc6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-99mbz" [dd4772de-76cb-4499-a600-e96f3ccf2dc6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.006397725s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-427678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (97.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-427678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m37.099865161s)
--- PASS: TestNetworkPlugins/group/bridge/Start (97.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-427678 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-427678 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-427678 replace --force -f testdata/netcat-deployment.yaml: (1.434979538s)
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4rtmh" [4d09267f-7aeb-4aec-ac09-4afe590d9b7c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4rtmh" [4d09267f-7aeb-4aec-ac09-4afe590d9b7c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005210281s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2ddhv" [5f1e6960-a100-41a9-9555-b934126a42dc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005839962s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-427678 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-427678 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bbrrh" [95cc9575-90c2-4158-8338-9bcd8051dcc0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bbrrh" [95cc9575-90c2-4158-8338-9bcd8051dcc0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.007695305s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-427678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-427678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (122.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-324578 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-324578 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m2.10048414s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (122.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (129.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-743937 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-743937 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m9.27409203s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (129.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-427678 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-427678 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v7wsb" [fb0a3a8f-053f-4926-93fa-ed7eb379320f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v7wsb" [fb0a3a8f-053f-4926-93fa-ed7eb379320f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.005479489s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-427678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-427678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-766430 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-766430 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m2.194244266s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-766430 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ec389764-c294-4a41-95fa-1a2f7491b3f0] Pending
helpers_test.go:344: "busybox" [ec389764-c294-4a41-95fa-1a2f7491b3f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0311 21:26:24.554230   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
helpers_test.go:344: "busybox" [ec389764-c294-4a41-95fa-1a2f7491b3f0] Running
E0311 21:26:26.476147   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005055969s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-766430 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)
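DeployApp creates the busybox pod from testdata and, once it reports Running, reads the open-file limit inside the container. A hand-run sketch (testdata/busybox.yaml is the harness's own manifest and is not reproduced here):

    kubectl --context default-k8s-diff-port-766430 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-766430 wait --for=condition=ready pod/busybox --timeout=8m
    # Print the per-process file-descriptor limit seen inside the container.
    kubectl --context default-k8s-diff-port-766430 exec busybox -- /bin/sh -c "ulimit -n"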

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-324578 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f0775042-3ac4-4743-a85a-3df42267a6e6] Pending
E0311 21:26:23.916528   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:26:23.921841   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:26:23.932125   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:26:23.952421   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:26:23.992694   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:26:24.073011   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:26:24.233427   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f0775042-3ac4-4743-a85a-3df42267a6e6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0311 21:26:25.195117   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f0775042-3ac4-4743-a85a-3df42267a6e6] Running
E0311 21:26:28.681669   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:26:28.686942   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:26:28.697202   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:26:28.717507   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:26:28.757801   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:26:28.838264   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:26:28.998758   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:26:29.037004   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
E0311 21:26:29.319434   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:26:29.960545   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
E0311 21:26:31.240730   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004760805s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-324578 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-766430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-766430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.123212556s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-766430 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-324578 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0311 21:26:33.801681   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-324578 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.015766935s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-324578 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)
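EnableAddonWhileActive enables metrics-server with overridden image and registry values, then describes the deployment. To confirm the override actually landed, the rendered image reference can be grepped out of the describe output (a sketch; the fake.domain registry here only exercises the override path):

    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-324578 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # The Image: field should now point at fake.domain/... rather than the stock registry.
    kubectl --context no-preload-324578 -n kube-system describe deploy/metrics-server | grep Image: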

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-743937 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c671444e-966b-49a8-a879-eca251041b29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0311 21:26:38.921941   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/auto-427678/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c671444e-966b-49a8-a879-eca251041b29] Running
E0311 21:26:44.399107   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/kindnet-427678/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004421051s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-743937 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-743937 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-743937 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.093433517s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-743937 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (690.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-766430 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-766430 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (11m30.469134369s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-766430 -n default-k8s-diff-port-766430
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (690.74s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (591.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-324578 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-324578 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (9m50.896593459s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-324578 -n no-preload-324578
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (591.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (634.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-743937 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0311 21:29:32.571221   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:29:32.606383   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
E0311 21:29:51.177414   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:29:51.182692   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:29:51.192952   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:29:51.213223   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:29:51.253529   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:29:51.334053   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:29:51.494504   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:29:51.815264   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:29:52.456420   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:29:53.737259   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
E0311 21:29:56.298403   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-743937 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (10m33.790432506s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-743937 -n embed-certs-743937
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (634.05s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (3.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-239315 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-239315 --alsologtostderr -v=3: (3.298989124s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239315 -n old-k8s-version-239315: exit status 7 (74.825048ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-239315 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
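EnableAddonAfterStop first confirms the profile is down: minikube status renders the Host field through a Go template and, in this run, returned exit status 7 with "Stopped" on stdout, which the test accepts before enabling the dashboard addon against the stopped profile. The same sequence as a shell conditional (a sketch):

    if ! out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-239315 -n old-k8s-version-239315; then
      # Non-zero status (7 here) with a Stopped host is expected; the addon is staged for the next start.
      out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-239315 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    fi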

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (58.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-649653 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0311 21:53:51.607901   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/enable-default-cni-427678/client.crt: no such file or directory
E0311 21:53:51.644093   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/flannel-427678/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-649653 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (58.285058475s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.29s)
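The newest-cni profile is started with a bare CNI network plugin, a feature gate, and a kubeadm override delivered through --extra-config, which takes component.key=value pairs. The flags that set it apart from the other profiles (a sketch; memory, driver, and runtime match the command above):

    out/minikube-linux-amd64 start -p newest-cni-649653 --memory=2200 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.29.0-rc.2 \
      --network-plugin=cni \
      --feature-gates ServerSideApply=true \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --wait=apiserver,system_pods,default_sa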

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-649653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-649653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.295311136s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-649653 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-649653 --alsologtostderr -v=3: (10.656082471s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.66s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-649653 -n newest-cni-649653
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-649653 -n newest-cni-649653: exit status 7 (76.882459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-649653 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (38.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-649653 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0311 21:54:51.177316   18235 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-11004/.minikube/profiles/bridge-427678/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-649653 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (38.10034094s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-649653 -n newest-cni-649653
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-649653 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-649653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-649653 -n newest-cni-649653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-649653 -n newest-cni-649653: exit status 2 (251.432763ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-649653 -n newest-cni-649653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-649653 -n newest-cni-649653: exit status 2 (248.544423ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-649653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-649653 -n newest-cni-649653
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-649653 -n newest-cni-649653
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.65s)
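The Pause subtest runs the full pause/unpause cycle and verifies it through the same status templates: while paused, the profile reports APIServer as Paused and Kubelet as Stopped, each with exit status 2, which the test tolerates. A sketch of the round trip:

    out/minikube-linux-amd64 pause -p newest-cni-649653 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-649653 -n newest-cni-649653   # Paused, exit 2
    out/minikube-linux-amd64 status --format='{{.Kubelet}}'  -p newest-cni-649653 -n newest-cni-649653    # Stopped, exit 2
    out/minikube-linux-amd64 unpause -p newest-cni-649653 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-649653 -n newest-cni-649653   # expected to report a running API server again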

                                                
                                    

Test skip (39/319)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
260 TestNetworkPlugins/group/kubenet 3.21
268 TestNetworkPlugins/group/cilium 3.47
274 TestStartStop/group/disable-driver-mounts 0.18
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validates docker env with the docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validates podman env with the docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
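
Note: both DockerEnv and PodmanEnv validate only against the docker container runtime, and this job runs crio. A minimal sketch of a functional run that would exercise them, assuming the same kvm2 driver used here (profile name illustrative):

    out/minikube-linux-amd64 start -p functional-docker --driver=kvm2 --container-runtime=docker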

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
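
Note: every TunnelCmd subtest above skipped because the test host could not run 'route' without a password prompt. A minimal sketch of a one-time sudoers entry that would let these run unattended (the resolved route path and the sudoers file name are assumptions; verify them for the host):

    echo "$USER ALL=(ALL) NOPASSWD: $(command -v route)" | sudo tee /etc/sudoers.d/minikube-tunnel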

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires the none driver and a non-empty SUDO_USER env
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
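
Note: skaffold builds through minikube's docker-env, which is only exposed with the docker container runtime. A minimal sketch of the setup the test expects, assuming a docker-runtime profile (profile name illustrative):

    out/minikube-linux-amd64 start -p skaffold-test --container-runtime=docker
    eval $(out/minikube-linux-amd64 -p skaffold-test docker-env)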

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-427678 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-427678

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-427678

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-427678

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-427678

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-427678

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-427678

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-427678

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-427678

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-427678

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-427678

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-427678

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-427678" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-427678" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-427678

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-427678"

                                                
                                                
----------------------- debugLogs end: kubenet-427678 [took: 3.059636697s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-427678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-427678
--- SKIP: TestNetworkPlugins/group/kubenet (3.21s)
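
Note: kubenet is not a CNI plugin, and crio requires one, so this variant is skipped; the CNI-backed TestNetworkPlugins variants cover crio instead. A minimal sketch of the kind of start those variants perform (profile name and CNI choice illustrative):

    out/minikube-linux-amd64 start -p bridge-427678 --cni=bridge --container-runtime=crio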

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-427678 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-427678" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-427678

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-427678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-427678"

                                                
                                                
----------------------- debugLogs end: cilium-427678 [took: 3.320832011s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-427678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-427678
--- SKIP: TestNetworkPlugins/group/cilium (3.47s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-124446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-124446
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
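
Note: the disable-driver-mounts group only applies to the virtualbox driver, since hypervisor-provided filesystem mounts are what the flag turns off. A minimal sketch of the start it would perform on a virtualbox host (flags assumed from the test name, not taken from this run):

    out/minikube-linux-amd64 start -p disable-driver-mounts-124446 --driver=virtualbox --disable-driver-mounts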

                                                
                                    